We present CoreBFRB, a pre-trained gesture recognition model implemented on Keen2 and KeenLite devices. CoreBFRB improves real-time detection of body-focused repetitive behaviors (BFRBs) by reducing false positives through research-informed presets, while maintaining optional user configurability. This article provides a detailed account of the motivation, methodology, and research pipeline underlying CoreBFRB.

Simply select your behavior, then start using Keen
Since 2016, Keen devices have supported users through a one-time training paradigm, in which each user intentionally performs their behavior (e.g., hair pulling, skin picking, or nail biting) while motion data are collected. The resulting dataset is used to build the recognition model for real-time detection. While conceptually elegant and customizable, this approach is sensitive to data quality: even minor confounds (e.g., posture shifts, unrelated wrist movements) during training can degrade performance. Poor training datasets often yield models with excessive false positives, which erode user trust and may lead to device abandonment.
Informed by several years of customer support interactions and quantitative feedback, HabitAware identified the need for a more robust, generalizable approach. CoreBFRB was designed to leverage population-level training data to supply pre-defined behavior models, mitigating the risk of user error in the training phase while retaining adaptability through sensitivity and orientation parameters.
We encourage you to give this a try for yourself:
- Get the Keen2 bracelet at habitaware.com
- Try the demo for Apple Watch, "KeenLite", on the App Store
- Update your iOS app on the App Store
- Update your Android app on the Google Play Store
Built on real Keen usage
Extensive analysis of aggregated, anonymous training data guided the development of CoreBFRB. Results from thousands of training sessions revealed:
- Behavior prevalence: 68% of Keen users train for hair pulling, 22% for skin picking, and 10% for other behaviors. Notably, 81% of all configurations target head-related behaviors (Figure 1).
- Motion sensitivity adjustments: On a 0–15 scale, 54% of users retain the default setting (8). Approximately 30% lower sensitivity to reduce false positives, while 16% increase sensitivity to capture more true positives (Figure 2).
- Orientation tolerance: The majority favor a ‘wide’ tolerance, but 34% narrow the tolerance to reduce false detections (Figure 3).
These empirical distributions highlight the trade-off space between true positive rate (TPR) and false positive rate (FPR), which guided the calibration of CoreBFRB’s default parameters.
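As a hedged illustration of this trade-off (the actual mapping is internal to Keen), a user-facing sensitivity on a 0–15 scale can be thought of as controlling a detection threshold: raising sensitivity lowers the threshold, trading more true positives for more false positives. The function name, threshold range, and linear mapping below are all assumptions for illustration:

```python
def sensitivity_to_threshold(sensitivity: int,
                             min_thresh: float = 0.2,
                             max_thresh: float = 0.9) -> float:
    """Map a user-facing 0-15 sensitivity to a hypothetical detection
    threshold. Higher sensitivity -> lower threshold -> more detections
    (higher TPR, but also higher FPR). Illustrative values only."""
    if not 0 <= sensitivity <= 15:
        raise ValueError("sensitivity must be in [0, 15]")
    frac = sensitivity / 15.0
    return max_thresh - frac * (max_thresh - min_thresh)

# Lowering sensitivity (e.g., 8 -> 4) raises the threshold, suppressing
# false positives; raising it (e.g., 8 -> 12) lowers the threshold,
# capturing more true positives.
default_thresh = sensitivity_to_threshold(8)
```

This framing makes the 54/30/16 split above interpretable: most users accept the default operating point, while the rest shift it toward fewer false alarms or fewer missed events.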
Figure 1. Distribution of user-selected behaviors (left) and target body areas (right).
Figure 2. Distribution of motion sensitivity adjustments across users.
CoreBFRB detection framework
BFRB recognition draws on two primary signal categories:
- Macroscopic approach motion – wrist trajectories toward canonical action sites (scalp, brows, mouth).
- Micromotions and orientation states – fine-grained wrist orientations and repetitive micro-dynamics during the behavior.
CoreBFRB fuses both signal families to create behavior-specific detection profiles. Upon user selection of a target behavior, the model activates parameter sets aligned with the empirically observed orientation-motion clusters for that behavior (Figure 4). Adjustable elements (hand-raise filters, motion thresholds, and orientation tolerance) allow user-specific refinement without the burden of initial free-form training.
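A minimal sketch of this two-stage fusion follows. The production model's internals are not public, so the parameter names, threshold values, and gating logic here are all assumptions chosen to illustrate how an approach-motion gate can combine with orientation and micromotion checks:

```python
from dataclasses import dataclass

@dataclass
class BehaviorProfile:
    """Hypothetical per-behavior parameter set (illustrative values)."""
    name: str
    raise_height: float            # min hand-raise extent to arm detection
    motion_threshold: float        # min micromotion energy during behavior
    orientation_center: tuple      # canonical wrist pitch/roll (degrees)
    orientation_tolerance: float   # allowed deviation ("wide" vs "narrow")

def detect(profile, hand_raised_height, motion_energy, pitch, roll):
    """Fuse macroscopic approach motion with micromotion/orientation state.

    Stage 1 gates on the hand raise (approach toward the action site);
    stage 2 requires both repetitive micromotion energy and a wrist
    orientation near the behavior's canonical pose.
    """
    if hand_raised_height < profile.raise_height:
        return False  # no approach motion toward the target site
    dp = abs(pitch - profile.orientation_center[0])
    dr = abs(roll - profile.orientation_center[1])
    in_pose = max(dp, dr) <= profile.orientation_tolerance
    return in_pose and motion_energy >= profile.motion_threshold

# Hypothetical preset for one behavior; the user-adjustable elements
# (raise filter, motion threshold, orientation tolerance) map directly
# onto the profile fields.
hair_pulling = BehaviorProfile("hair_pulling", raise_height=0.6,
                               motion_threshold=0.3,
                               orientation_center=(-70.0, 10.0),
                               orientation_tolerance=25.0)
```

Widening `orientation_tolerance` or lowering `motion_threshold` in this sketch corresponds to the user-facing adjustments described above: the preset supplies the empirically grounded defaults, and refinement only nudges these parameters.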

Figure 4. The simulated position target for eyebrows (yellow) as the Side‑of‑Head area is selected.
Research and development pipeline
HabitAware’s research program integrates engineering rigor with translational applicability. The workflow encompasses:
- Experimental design – structured collection of labeled BFRB and non-BFRB motion episodes.
- High-frequency data capture – wrist sensor data sampled at sufficient rates to preserve fine-scale signal features.
- Feature engineering and algorithm development – extraction of orientation and motion features relevant to BFRB detection.
- Simulation and validation – stress testing of candidate models across aggregated datasets.
- Embedded deployment validation – model performance assessed on ARM microcontrollers, Apple Watch, and WearOS.
- Beta testing feedback – structured reporting from feasibility and clinical trial participants to refine thresholds and usability.
This closed-loop process (Figure 5) ensures coherence from experimental design to real-world deployment.
Figure 5. HabitAware’s research and development workflow.
Custom data collection infrastructure
While existing research wearables (MbientLab, Empatica, ActiGraph) and consumer smartwatches provide rapid prototyping capabilities, they impose scalability limitations for the specialized task of BFRB data collection. Supported by the NSF Phase IIB award, HabitAware engineered a bespoke ecosystem comprising:
- A custom wearable optimized for subtle, low-energy BFRB movements.
- Video capture integration for synchronized ground truth labeling.
- Cloud-based infrastructure for secure data transfer, storage, and annotation.
- Software interfaces for study partners to manage data independently.
This infrastructure enables multi-modal, multi-context data acquisition, including supervised in-lab, unsupervised naturalistic, and remotely supervised conditions (Figures 6–8).



From analysis to deployment
Collected data are transformed into orientation-mapped and motion-feature representations. Wrist orientation is modeled in 3D space, projected onto 2D contour plots to visualize high-density zones of BFRB-related poses (Figure 9). This facilitates:
- Identification of orientation clusters most predictive of true behaviors.
- Estimation of overlap with normal daily motions (potential false positives).
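One common way such a 2D projection can be computed is from the gravity component of the wrist accelerometer. This is a sketch under that assumption; Keen's actual orientation parameterization may differ, and the function names and bin size are illustrative:

```python
import math

def gravity_to_pitch_roll(ax, ay, az):
    """Project a 3D gravity vector from the wrist accelerometer onto
    two orientation angles in degrees (a common convention)."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

def orientation_histogram(samples, bin_deg=10):
    """Accumulate (pitch, roll) pairs into coarse 2D bins; dense bins
    correspond to the high-density pose zones visualized as contours."""
    hist = {}
    for ax, ay, az in samples:
        p, r = gravity_to_pitch_roll(ax, ay, az)
        key = (int(p // bin_deg) * bin_deg, int(r // bin_deg) * bin_deg)
        hist[key] = hist.get(key, 0) + 1
    return hist
```

Comparing a histogram built from labeled BFRB episodes against one built from everyday motion directly exposes the overlap regions that drive false positives.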
Motion analysis isolates features such as hand-raise profiles, which provide temporal segmentation of BFRB events (Figure 10). Simulation across diverse datasets yields population-level thresholds that balance sensitivity and specificity (Figure 11).
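The hand-raise profile described above can act as a simple temporal segmenter. The following is a hedged sketch of one such scheme, hysteresis-based episode splitting, with threshold values assumed for illustration:

```python
def segment_hand_raises(heights, on_thresh=0.6, off_thresh=0.4):
    """Split a stream of hand-raise heights (0..1) into candidate BFRB
    episodes using hysteresis: an episode opens when the height crosses
    on_thresh and closes when it falls below off_thresh. Returns
    (start_index, end_index) pairs. Thresholds are illustrative."""
    episodes, start = [], None
    for i, h in enumerate(heights):
        if start is None and h >= on_thresh:
            start = i                      # arm raised: open an episode
        elif start is not None and h < off_thresh:
            episodes.append((start, i))    # arm lowered: close it
            start = None
    if start is not None:                  # stream ended mid-episode
        episodes.append((start, len(heights)))
    return episodes
```

The gap between the two thresholds prevents a single brief dip from splitting one behavior episode into many fragments, which is the kind of population-level trade-off the simulation stage is designed to tune.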


Figure 10. One of several motion‑feature signals during hair pulling, with model stages.

Cross-platform deployment remains technically challenging, given differing computational capacities of microcontrollers vs. consumer smartwatches. Current NSF-supported efforts aim to automate cross-compilation and validation pipelines, ensuring consistent detection fidelity across devices.
Conclusions and future directions
CoreBFRB represents a shift from individualized, error-prone training to robust pre-trained recognition models grounded in aggregated user data and controlled experimentation. By reducing false positives and streamlining setup, the model enhances user engagement and long-term efficacy.
Future work will extend the CoreBFRB framework toward additional low-energy, infrequent behavior domains within mental health technology. Our data collection ecosystem and model development pipeline provide the foundation for these expansions.
Acknowledgements
We acknowledge our research partners at Marquette University, Kent State University, and the University of Minnesota, as well as HabitAware’s internal team, Keen advocates, and investors. This research is supported by NSF Grant #2026173 and the National Institute of Mental Health (NIH) under Award R43MH114773. Content herein reflects the authors’ views and not necessarily those of NIH or NSF.