Smart Button: Bio-enabled Wearable Devices


NSF 1525235


Initial Research Investigation

The human voice can be leveraged as a source of natural acoustic events for ranging and localization. However, people do not always talk while interacting with each other, so we can hardly control the generation of these events. Fortunately, our previous work [112, 114] has looked into this issue and concluded that it is feasible to determine not only the distances between but also the locations of in-field entities using naturally generated events. We attach one or multiple smart buttons, each with a miniature wide-band acoustic sensor/microphone (100 - 10,000 Hz), to each person who participates in a group conversation. In the room, we would optionally utilize other acoustic sources, such as speakers, for active ranging and bio-actuation. The whole system can work either as a passive sonar when people actively talk, or as an active sonar when artificial acoustic events are injected into the system.

TDOA (i.e., time difference of arrival) will be employed as the principal mechanism for distance measurement and positioning. Because of the ultra-tight resource constraints, the state-of-the-art TDOA solutions [73, 71, 52] can hardly be implemented directly in our smart button system. For instance, the classical approach of simultaneous radio and sound transmission proposed in [73] is not possible here because of practical issues, including (i) the acoustic emitter is too heavy and energy-consuming; and (ii) the radio transmission may not be picked up by the prospective receiver with a duty-cycled radio. To address these challenges, we propose a new acoustic-driven, synchronization-free approach for bio-enabled ranging and positioning, an example of which is depicted in the figure, where Ref. R stands for a sensor unit/node installed in the room with known coordinates. The basic mechanisms and principles can be summarized as follows:

  1. Radio for ranging is enabled only after positive acoustic signal detection. When person A makes a sound, denoted as an event at t0 in the figure, its own acoustic sensor Mic. A and the other sensors within range (i.e., Mic. B of person B and Mic. R of sensor unit R) detect this event at different time instances depending on their distances to person A. Upon detecting the sound, a person's radio transceiver is turned on and sends out a short radio packet with its ID and the time interval between the acoustic detection and the radio transmission, e.g., [A, (t2 − t1)] in the figure. Reference buttons (with known pairwise distances) record the times of their acoustic signal detections, the messages received from smart buttons (e.g., Buttons A and B), and the arrival times of these messages (e.g., t2′ and t4′ in the figure).

  2. Event source localization in space and time. Acoustic detections at reference nodes are aggregated and processed to estimate both the source location and the occurrence time of the acoustic event, based on the differences in arrival times at reference sensor nodes with known coordinates. After this step, the position of the person making the sound can be identified.

  3. Bio-enabled ranging. With the results obtained from the previous step and the timestamped radio messages received from neighboring smart buttons, the physical distances of these buttons from the event source (the person who made the sound) can be calculated and calibrated.
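The event source localization in step 2 can be sketched as a search over candidate source positions that minimizes the mismatch between predicted and observed differences of arrival times at the reference nodes. The sketch below is illustrative only: the 2D coordinates, grid resolution, room extent, and the 343 m/s speed of sound are assumptions, not parameters of our actual deployment.

```python
# Sketch of TDOA-based source localization via grid search.
# All coordinates and parameters below are hypothetical examples.
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate value for room-temperature air

def tdoa_localize(ref_nodes, detect_times, step=0.05, extent=10.0):
    """Grid-search the 2D source position that best explains the
    pairwise differences of detection times at the reference nodes."""
    (x0, y0), t0 = ref_nodes[0], detect_times[0]  # TDOA reference node
    best, best_err = None, float("inf")
    n = int(extent / step)
    for i in range(n):
        for j in range(n):
            x, y = i * step, j * step
            d0 = math.hypot(x - x0, y - y0)
            err = 0.0
            for (xr, yr), tr in zip(ref_nodes[1:], detect_times[1:]):
                dr = math.hypot(x - xr, y - yr)
                # Predicted vs. observed time difference of arrival;
                # the unknown emission time cancels out in the difference.
                pred = (dr - d0) / SPEED_OF_SOUND
                err += (pred - (tr - t0)) ** 2
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Simulated event at (3.0, 4.0) m with four wall-mounted reference nodes.
refs = [(0.0, 0.0), (8.0, 0.0), (0.0, 8.0), (8.0, 8.0)]
src = (3.0, 4.0)
times = [math.hypot(src[0] - xr, src[1] - yr) / SPEED_OF_SOUND
         for xr, yr in refs]
est = tdoa_localize(refs, times)
```

Once the source position is recovered, the emission time follows as the detection time at any reference node minus that node's propagation delay, which is what step 3 needs to convert the timestamped radio messages into ranges. A real implementation would replace the grid search with a closed-form or least-squares multilateration solver.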




Open Research Problems

In practice, the proposed method may face multiple challenges. We summarize the key generic problems to be addressed in this part of the research as follows:

  1. NLOS (non-line-of-sight) Detection. Acoustic signals are directional and can be reflected and deflected in space, so non-line-of-sight detections exist in practical systems. This problem can also be introduced by an inappropriate mechanical design of the smart buttons. NLOS detection results are detrimental to both event source localization and bio-enabled ranging, as they introduce singular values during matrix decomposition. We shall address this effectiveness issue by carefully developing the system with extensive experimentation and calibration. Moreover, we shall investigate pre-processing algorithms that can effectively identify and remove sensing samples that are subject to NLOS effects.

  2. Mixed Source Separation. When several people talk at the same time, acoustic ranging becomes much more challenging. In this part of the research, we plan to address this scalability problem by (i) developing an acoustic signal training system that records, analyzes, extracts, and characterizes features of the sound made by each person under study, and (ii) developing a multi-channel software filter that can effectively separate sounds from different acoustic sources based on the signatures obtained with the training system.

  3. Unbalanced Performance Prevention. In socialization, individuals have diverse roles and activities. Some individuals usually talk much more than others, which brings about unbalanced localization performance across individuals. On the other hand, a conversation group could occasionally be quiet for a while, in which case the acoustic ranging system temporarily loses track of their interactions. To address these issues, we will opportunistically exploit ambient sound generated by any type of acoustic emitter (e.g., speakers) in the room to restore the desired localization performance. Our previous work [115] has shown the potential to localize stationary nodes using natural, uncontrolled ambient acoustic events.

  4. Reactive Ranging and Localization. Like proximity detection, bio-enabled ranging does not need to be conducted continuously, because a person's location status does not change very quickly. Therefore, processing the sampled acoustic events in an on-demand manner is crucial for improving the overall energy efficiency of this function. We also believe that the design of reactive ranging and localization heavily depends on both the bio-behavior research results on how persons interact with each other and the implementation constraints at the system level. In this part of the research, we plan to leverage our interdisciplinary research strength to investigate a set of techniques that promote real-time reactive operation.
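The reactive, on-demand operation in problem 4 (and the acoustic wake-up in step 1 of the ranging scheme) ultimately reduces to a lightweight event detector that gates the radio. One simple realization, shown below as an illustrative sketch rather than our actual firmware, compares short-window signal energy against an adaptively tracked noise floor; the window length, threshold factor, and synthetic signal are all assumptions.

```python
# Sketch of an energy-threshold acoustic event detector that could gate
# radio wake-up for reactive ranging. Parameters are illustrative.
import random

def detect_events(samples, win=100, factor=4.0):
    """Return start indices of windows whose mean energy exceeds
    `factor` times a running estimate of the ambient noise floor."""
    events, noise = [], None
    for start in range(0, len(samples) - win + 1, win):
        energy = sum(s * s for s in samples[start:start + win]) / win
        if noise is None:
            noise = energy  # bootstrap the noise-floor estimate
        if energy > factor * noise:
            events.append(start)  # here the radio would be woken up
        else:
            # Exponentially track the ambient noise floor on quiet windows.
            noise = 0.95 * noise + 0.05 * energy
    return events

# Synthetic stream: low-level noise with one loud burst in the middle.
random.seed(0)
sig = [random.uniform(-0.01, 0.01) for _ in range(1000)]
for i in range(400, 500):
    sig[i] += 0.5  # the "voice" event
evts = detect_events(sig)
```

Because the noise floor adapts only on quiet windows, a sustained loud environment would not permanently raise the threshold; whether that behavior is desirable depends on the deployment and would be tuned experimentally.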

Bio-enabled Energy Efficient Sensing and Collection

In this work, we propose a smart button design with an ultimate target weight of 2 grams (ideally 1.2 g) that can be attached non-intrusively to clothing. Due to the weight constraint, heavy batteries are not a viable option. Moreover, the need for non-intrusive monitoring prevents us from changing batteries frequently. Therefore, energy efficiency is one of the unique design objectives of the smart button platform. In our current preliminary design, we employ the Ember EM357 SoC [51], which consists of an RF transceiver with a baseband modem, a hardwired MAC, and an embedded 32-bit ARM Cortex-M3 microcontroller with internal RAM (12 kB) and flash (192 kB) memory. It has the smallest form factor (7 mm x 7 mm x 0.85 mm) in its class and weighs less than 0.2 grams in a 48L QFN package. We will replace the current RF4 PCB board with a much lighter flexible PCB film weighing 0.1 grams. For the energy supply, we chose ultra-low-weight lithium-ion batteries from PowerStream [42]. The model PGEB201212 provides 10 mAh of total energy capacity at a weight of 0.45 grams. Although the total weight budget (including additional sensors) is currently below our target weight, it is not a trivial task to integrate these components into a robust button enclosure.
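The weight budget above can be checked with simple arithmetic. The sketch below sums only the three component weights stated in the text (SoC, flexible PCB film, battery); additional sensors and the enclosure are deliberately left out, since their weights are not given here.

```python
# Back-of-the-envelope weight-budget check for the smart button.
# Only weights stated in the text are included; sensors and the
# enclosure are omitted because their weights are not specified.
components_g = {
    "EM357 SoC (48L QFN)":       0.2,   # "less than 0.2 grams"
    "flexible PCB film":         0.1,
    "PGEB201212 Li-ion battery": 0.45,
}
total = sum(components_g.values())      # 0.75 g

margin_ideal = 1.2 - total  # headroom against the 1.2 g ideal target
margin_max   = 2.0 - total  # headroom against the 2 g ceiling
```

The listed parts leave roughly 0.45 g of headroom under the ideal 1.2 g target, which is the budget available for sensors and the enclosure and explains why the mechanical integration is the remaining challenge.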
Preliminary Results: To confirm that our proposed SoC is appropriate for this study, we conducted a set of preliminary experiments on energy consumption using the breakout board designed for accurate measurement of the current flowing into the EM357 (first figure below). The oscilloscope used for the measurements was a Tektronix DPO 4054, and power was supplied via USB. For stable measurement, at least 10⁴ samples were used for each test.
The EM357 can operate in two modes: (i) active mode, where all components are turned on and ready to use, and (ii) sleep mode, where nearly all devices on the EM357 are turned off and only the minimum current is retained to sustain what is in its memory. In our experiment, the active current (or equivalently, active power consumption) is found to be 8.52 mA, as shown in the second figure, whereas the sleep current is measured to be an extremely small value of 0.47 µA. Note the great difference between the two: the active current is 1.8×10⁴ times greater than the sleep current. This indicates that the EM357 can be used effectively for long-term operation by letting it sleep most of the time. Depending on the programming, the EM357 can change its state from sleep mode to active mode and vice versa. The third figure shows the transition between active states (8.52 mA) and sleep states (0.47 µA). We note that the EM357 radio takes approximately 100 µs to finish the transition, a nice feature that allows us to efficiently save energy by duty cycling the EM357 nodes.
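The measured currents translate directly into battery-lifetime estimates under duty cycling. The sketch below uses the measured 8.52 mA active and 0.47 µA sleep currents and the battery's 10 mAh rated capacity; the 0.1% duty cycle is an illustrative assumption, and real lifetime would also depend on transition overhead, sensor draw, and battery self-discharge, which are ignored here.

```python
# Rough lifetime estimate for a duty-cycled EM357 node.
# Measured currents are taken from the text; the duty cycle and the
# assumption of ideal 10 mAh usable capacity are illustrative.
ACTIVE_MA = 8.52       # measured active current, mA
SLEEP_MA = 0.47e-3     # measured sleep current, mA (0.47 uA)
CAPACITY_MAH = 10.0    # PGEB201212 rated capacity, mAh

def lifetime_hours(duty_cycle):
    """Average current under a periodic duty cycle, then the hours
    until the battery capacity is depleted."""
    avg_ma = duty_cycle * ACTIVE_MA + (1.0 - duty_cycle) * SLEEP_MA
    return CAPACITY_MAH / avg_ma

always_on = lifetime_hours(1.0)    # node never sleeps: about an hour
low_duty = lifetime_hours(0.001)   # 0.1% duty cycle: weeks of operation
```

An always-on node would drain the 10 mAh cell in little more than an hour, while a 0.1% duty cycle stretches the same cell to over a thousand hours, which is why the roughly 1.8×10⁴ active-to-sleep current ratio and the fast 100 µs mode transition are central to the platform's energy budget.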