Arduino Snore Stopper with Artificial Intelligence

Snoring is annoying, that’s clear. There are devices, pillows, apps, and many other remedies against it – but most of them probably don’t live up to their promises. If you snore and have promised to do something about it, you can combine your suffering with your hobby using this tutorial: You’ll build an Arduino snore stopper with artificial intelligence.

We’ll use an Arduino Nano 33 BLE Sense and a vibration motor. The microcontroller runs an AI model that detects whether you’re snoring using the Arduino’s microphone. If that’s the case, the motor starts and (hopefully) wakes you up.

Building the Snore Stopper

Besides the Arduino Nano 33 BLE Sense, you only need a vibration motor and a connection cable for a 9V battery. The motor should be strong enough to wake you up, which of course also depends on where you place the snore stopper. If you use an armband, for example, the vibration motor can rest directly on the skin of your upper arm or just above your wrist. The band that comes with the Ring Fit Adventure game for the Nintendo Switch works well for this purpose.

Instead of vibration, you can of course use other means. A piezo buzzer would certainly rouse you from sleep – unfortunately, it would also wake your bed partner. If you want to try it anyway, you’ll find a short buzzer sketch right after the wiring diagram below. In the following, we’ll stick with the vibration motor. The setup then looks like this:

Arduino Snore Stopper Setup

If you have an Arduino with header pins, you can adapt the wiring of the two components accordingly: solder a cable ending in a socket to each of their leads so that you can simply plug them onto the Arduino’s pins. Here’s the setup as a sketch:

Arduino Snore Stopper Wiring
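
By the way, if you’d rather experiment with the piezo buzzer mentioned above, a minimal test sketch could look like this. The pin and the frequency are my own choices and not part of the original project:

// Minimal piezo test, assuming the buzzer’s signal wire is on pin D3 (hypothetical choice).
// It beeps in roughly the same on/off rhythm the vibration motor will use later.
int buzzerPin = D3;

void setup() {
  pinMode(buzzerPin, OUTPUT);
}

void loop() {
  tone(buzzerPin, 2000);   // 2 kHz beep for one second
  delay(1000);
  noTone(buzzerPin);       // then 1.5 seconds of silence
  delay(1500);
}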

The Appropriate AI Model and the Sketch

For the snore stopper to recognize your snoring, you need an AI model that runs in your Arduino sketch.

This tutorial is based on this project on GitHub – with a few adjustments. The maker metanav has already done a lot of preliminary work there that you can reuse. Download the project as a ZIP file on GitHub or directly here.

Download the snore stopper project from GitHub

Then unzip the ZIP file and open the sketch tflite_micro_snoring_detection.ino, which you can find in the folder Snoring-Guardian-main > snoring_detection_inferencing > examples > tflite_micro_snoring_detection.

Next, include the supplied library. It contains the AI model you will use. Open the menu item Sketch > Include Library > Add ZIP Library in the Arduino IDE and select the file Snoring_detection_inferencing.zip.

Installing the Arduino Nano Board Package and Another Library

If you haven’t made the Arduino Nano 33 BLE Sense available in the Arduino IDE yet, now is the time. Open the Boards Manager in the menu on the left, search for Arduino Nano 33 BLE and install the current version of the Arduino Mbed OS Nano Boards package.

Install the Arduino Nano BLE Sense in the IDE

You also need the RingBuf library, which you can install via the Library Manager. But be careful: The library of the same name doesn’t work on the Arduino Nano. Instead, install the RingBuffer library by Jean-Luc – Locoduino:

Install the RingBuffer Library
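
A quick way to check that you have the right library: the following mini sketch uses the ring buffer the same way the snore stopper does (push, pop, isFull) and should compile and run without problems. It is only a test and not part of the project:

#include <RingBuf.h>

// A buffer for up to 10 one-byte values, like the prediction history in the snore stopper.
RingBuf<uint8_t, 10> testBuffer;

void setup() {
  Serial.begin(115200);
  while (!Serial);

  // Push twelve values; once the buffer is full, drop the oldest entry first.
  for (uint8_t i = 0; i < 12; i++) {
    if (testBuffer.isFull()) {
      uint8_t oldest;
      testBuffer.pop(oldest);
    }
    testBuffer.push(i);
  }

  Serial.print("Entries in buffer: ");
  Serial.println(testBuffer.size());   // should print 10
}

void loop() {}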

For a test, select your board in the Arduino IDE and click on the checkmark at the top left to verify the snore detection sketch. Compiling takes quite a while, but if the correct libraries have been installed and included and the right board is selected, it should complete successfully:

Arduino IDE Output

Adjustments in the Sketch

You can upload the sketch for the snore stopper directly to your Arduino and use it, but depending on which vibration motor (or other component) you use, you need to adjust a few things.

In the sketch, the function void run_vibration() takes care of starting the motor. In the example sketch, it looks like this:

void run_vibration()
{
  if (alert)
  {
    is_motor_running = true;

    for (int i = 0; i < 2; i++)
    {
      analogWrite(vibratorPin, 30);
      delay(1000);
      analogWrite(vibratorPin, 0);
      delay(1500);
    }
    
    is_motor_running = false;
  } else {
    if (is_motor_running)
    {
      analogWrite(vibratorPin, 0);
    }
  }
  yield();
}

Here, the motor is switched on twice for one second each, with a pause of 1.5 seconds in between. For this, analogWrite() is used with a value of 30 (out of a possible 255), i.e. a fairly gentle vibration. However, the vibration motor I use only understands on and off. If this is also the case for you, change the relevant part as follows:

for (int i = 0; i < 2; i++) {
  digitalWrite(vibratorPin, HIGH);
  delay(5000);
  digitalWrite(vibratorPin, LOW);
  delay(1000);
}

Here you use digitalWrite() and send either a HIGH or a LOW to the motor. The running and pause times have also been changed – the five seconds of vibration are aimed more at deep sleepers.
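
If your motor is connected through a transistor or a small driver module that does respond to PWM, you could also ramp the intensity up gradually instead of switching it hard on, for example by replacing the loop in run_vibration() with something like this. That is a variation of mine and not part of the original project:

// Gradually increase the vibration strength, assuming the driver responds to PWM on vibratorPin.
for (int strength = 50; strength <= 250; strength += 50) {
  analogWrite(vibratorPin, strength);   // 50, 100, 150, 200, 250 out of 255
  delay(1000);
}
analogWrite(vibratorPin, 0);            // motor off
delay(1500);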

And one more adjustment: If you have connected your vibration motor to pin D2 as in the sketch above, change the corresponding line in the sketch:

int vibratorPin = D2;
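
Before uploading the complete sketch, you can check the wiring with a short standalone test. This is just an addition of mine that pulses the motor once per loop pass:

// Standalone wiring test for the vibration motor on D2 (not part of the snore stopper sketch).
int vibratorPin = D2;

void setup() {
  pinMode(vibratorPin, OUTPUT);
}

void loop() {
  digitalWrite(vibratorPin, HIGH);   // motor on for one second
  delay(1000);
  digitalWrite(vibratorPin, LOW);    // motor off for two seconds
  delay(2000);
}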

Now upload the complete snore stopper sketch to your Arduino Nano – you’ll find it at the very end of this tutorial.

Testing the Snore Stopper

Now it’s time – as soon as the sketch has successfully landed on your Arduino, open the Serial Monitor in the IDE. There you can see the predictions that the AI model makes based on the sounds it receives via the Arduino’s built-in microphone:

Snore Stopper Output in the Serial Monitor

In the case marked in red above, the model classified the sound as normal background noise with a probability of 99.219%. The probability that someone was snoring was only 0.781%.

It’s probably a bit embarrassing, but now imitate typical snoring sounds several times in a row. You’ll see that the Arduino’s internal LED lights up and the output in the Serial Monitor changes accordingly. As soon as snoring has been detected several times, the vibration motor also starts and vibrates in the rhythm you defined in the run_vibration() function.
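
Behind the scenes, the sketch collects its high-confidence snoring detections in a ring buffer that holds up to ten entries and only sets the alert flag once at least five of them have accumulated. A single misclassified cough therefore won’t start the motor. Stripped of all the audio handling, the idea looks roughly like this (a simplified illustration, not a copy of the complete sketch at the end):

#include <RingBuf.h>

// Every audio slice classified as snoring with more than 90% probability pushes a 1.
RingBuf<uint8_t, 10> last_ten_predictions;

bool should_trigger_alert() {
  uint8_t count = 0;
  for (uint8_t j = 0; j < last_ten_predictions.size(); j++) {
    count += last_ten_predictions[j];   // sum of 1s = number of recent snoring detections
  }
  return count >= 5;                    // trigger once at least five have accumulated
}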

Next, it’s time for some “real” tests at night. Since you can also power your Arduino Nano with a 9V battery, nothing stands in the way of your experiments in bed. You’ll probably have to try out several positions for the device so that you actually wake up from the vibrations and the Arduino’s microphone can clearly pick up your snoring. If it can’t, false alarms may occur.

And of course, there’s no guarantee that your new snore stopper will lead to quiet nights at all…

Develop Your Own AI Model

Since you’ve been using a pre-made model so far, it’s possible that it doesn’t work optimally for you. After all, everyone snores differently – and the snoring sounds used for training the model may differ greatly from your own.

So if you want to go a step further, it’s no problem – just a bit of work. On Pollux Labs, you’ll find tutorials on how to use the Edge Impulse service to develop your own AI model. There you’ll learn how to connect your Arduino Nano 33 BLE Sense to Edge Impulse, collect data with it, and train a model of your own. That tutorial uses motion data, but training with audio works in much the same way.

Speaking of audio: your smartphone is perfectly suitable for collecting enough snoring sounds. Simply start an audio recording and let the phone lie next to you overnight. You can then extract the relevant passages from the recording and process them in Edge Impulse.

And now have fun and success experimenting with your snore stopper!

The Complete Sketch

Here’s the entire sketch with the mentioned adjustments:

// If your target is limited in memory remove this macro to save 10K RAM
#define EIDSP_QUANTIZE_FILTERBANK   0

/**
   Define the number of slices per model window. E.g. a model window of 1000 ms
   with slices per model window set to 4. Results in a slice size of 250 ms.
   For more info: https://docs.edgeimpulse.com/docs/continuous-audio-sampling
*/
#define EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW 3

/* Includes ---------------------------------------------------------------- */
#include <PDM.h>
#include <Scheduler.h>
#include <RingBuf.h>
#include <snore_detection_inferencing.h>

/** Audio buffers, pointers and selectors */
typedef struct {
  signed short *buffers[2];
  unsigned char buf_select;
  unsigned char buf_ready;
  unsigned int buf_count;
  unsigned int n_samples;
} inference_t;

static inference_t inference;
static bool record_ready = false;
static signed short *sampleBuffer;
static bool debug_nn = false; // Set this to true to see e.g. features generated from the raw signal
static int print_results = -(EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW);

bool alert = false;

RingBuf<uint8_t, 10> last_ten_predictions;
int greenLED = LEDG;    // built-in green RGB LED (active-low: LOW = on)
int vibratorPin = D2;   // Vibration motor connected to D2 PWM pin
bool is_motor_running = false;

void run_vibration() {
  if (alert) {
    is_motor_running = true;

    for (int i = 0; i < 2; i++) {
      digitalWrite(vibratorPin, HIGH);
      delay(5000);
      digitalWrite(vibratorPin, LOW);
      delay(1000);
    }

    is_motor_running = false;
  } else {
    if (is_motor_running) {
      digitalWrite(vibratorPin, LOW);
    }
  }
  yield();
}



/**
   @brief      Printf function uses vsnprintf and output using Arduino Serial

   @param[in]  format     Variable argument list
*/
void ei_printf(const char *format, ...) {
  static char print_buf[1024] = { 0 };

  va_list args;
  va_start(args, format);
  int r = vsnprintf(print_buf, sizeof(print_buf), format, args);
  va_end(args);

  if (r > 0) {
    Serial.write(print_buf);
  }
}

/**
   @brief      PDM buffer full callback
               Get data and call audio thread callback
*/
static void pdm_data_ready_inference_callback(void)
{
  int bytesAvailable = PDM.available();

  // read into the sample buffer
  int bytesRead = PDM.read((char *)&sampleBuffer[0], bytesAvailable);

  if (record_ready == true) {
    for (int i = 0; i < (bytesRead >> 1); i++) {
      inference.buffers[inference.buf_select][inference.buf_count++] = sampleBuffer[i];

      if (inference.buf_count >= inference.n_samples) {
        inference.buf_select ^= 1;
        inference.buf_count = 0;
        inference.buf_ready = 1;
      }
    }
  }
}

/**
   @brief      Init inferencing struct and setup/start PDM

   @param[in]  n_samples  The n samples

   @return     { description_of_the_return_value }
*/
static bool microphone_inference_start(uint32_t n_samples)
{
  inference.buffers[0] = (signed short *)malloc(n_samples * sizeof(signed short));

  if (inference.buffers[0] == NULL) {
    return false;
  }

  inference.buffers[1] = (signed short *)malloc(n_samples * sizeof(signed short));

  if (inference.buffers[1] == NULL) {
    free(inference.buffers[0]);
    return false;
  }

  sampleBuffer = (signed short *)malloc((n_samples >> 1) * sizeof(signed short));

  if (sampleBuffer == NULL) {
    free(inference.buffers[0]);
    free(inference.buffers[1]);
    return false;
  }

  inference.buf_select = 0;
  inference.buf_count = 0;
  inference.n_samples = n_samples;
  inference.buf_ready = 0;

  // configure the data receive callback
  PDM.onReceive(&pdm_data_ready_inference_callback);

  PDM.setBufferSize((n_samples >> 1) * sizeof(int16_t));

  // initialize PDM with:
  // - one channel (mono mode)
  // - a 16 kHz sample rate
  if (!PDM.begin(1, EI_CLASSIFIER_FREQUENCY)) {
    ei_printf("Failed to start PDM!");
  }

  // set the gain, defaults to 20
  PDM.setGain(127);

  record_ready = true;

  return true;
}

/**
   @brief      Wait on new data

   @return     True when finished
*/
static bool microphone_inference_record(void)
{
  bool ret = true;

  if (inference.buf_ready == 1) {
    ei_printf(
      "Error sample buffer overrun. Decrease the number of slices per model window "
      "(EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW)\n");
    ret = false;
  }

  while (inference.buf_ready == 0) {
    delay(1);
  }

  inference.buf_ready = 0;

  return ret;
}

/**
   Get raw audio signal data
*/
static int microphone_audio_signal_get_data(size_t offset, size_t length, float * out_ptr)
{
  numpy::int16_to_float(&inference.buffers[inference.buf_select ^ 1][offset], out_ptr, length);

  return 0;
}

/**
   @brief      Stop PDM and release buffers
*/
static void microphone_inference_end(void)
{
  PDM.end();
  free(inference.buffers[0]);
  free(inference.buffers[1]);
  free(sampleBuffer);
}


void setup()
{
  Serial.begin(115200);

  pinMode(greenLED, OUTPUT);
  digitalWrite(greenLED, HIGH);  // green LED off at startup (the RGB LED is active-low)
  pinMode(vibratorPin, OUTPUT);  // sets the pin as output

  // summary of inferencing settings (from model_metadata.h)
  ei_printf("Inferencing settings:\n");
  ei_printf("\tInterval: %.2f ms.\n", (float)EI_CLASSIFIER_INTERVAL_MS);
  ei_printf("\tFrame size: %d\n", EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE);
  ei_printf("\tSample length: %d ms.\n", EI_CLASSIFIER_RAW_SAMPLE_COUNT / 16);
  ei_printf("\tNo. of classes: %d\n", sizeof(ei_classifier_inferencing_categories) /
            sizeof(ei_classifier_inferencing_categories[0]));

  run_classifier_init();
  if (microphone_inference_start(EI_CLASSIFIER_SLICE_SIZE) == false) {
    ei_printf("ERR: Failed to setup audio sampling\r\n");
    return;
  }

  Scheduler.startLoop(run_vibration);   // run the vibration handler as a second, cooperative loop
}

void loop()
{

  bool m = microphone_inference_record();

  if (!m) {
    ei_printf("ERR: Failed to record audio...\n");
    return;
  }

  signal_t signal;
  signal.total_length = EI_CLASSIFIER_SLICE_SIZE;
  signal.get_data = &microphone_audio_signal_get_data;
  ei_impulse_result_t result = {0};

  EI_IMPULSE_ERROR r = run_classifier_continuous(&signal, &result, debug_nn);
  if (r != EI_IMPULSE_OK) {
    ei_printf("ERR: Failed to run classifier (%d)\n", r);
    return;
  }

  if (++print_results >= (EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW)) {
    // print the predictions
    ei_printf("Predictions ");
    ei_printf("(DSP: %d ms., Classification: %d ms., Anomaly: %d ms.)",
              result.timing.dsp, result.timing.classification, result.timing.anomaly);
    ei_printf(": \n");

    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
      ei_printf("    %s: %.5f\n", result.classification[ix].label,
                result.classification[ix].value);

      if (ix == 1 && !is_motor_running && result.classification[ix].value > 0.9) {
        if (last_ten_predictions.isFull()) {
          uint8_t k;
          last_ten_predictions.pop(k);
        }

        last_ten_predictions.push(ix);

        uint8_t count = 0;

        for (uint8_t j = 0; j < last_ten_predictions.size(); j++) {
          count += last_ten_predictions[j];
          //ei_printf("%d, ", last_ten_predictions[j]);
        }
        //ei_printf("\n");
        ei_printf("Snoring\n");
        digitalWrite(greenLED, LOW);   // green LED on (active-low)
        if (count >= 5) {
          ei_printf("Trigger vibration motor\n");
          alert = true;
        }
      }  else {
        ei_printf("Noise\n");
        digitalWrite(greenLED, HIGH);  // green LED off
        alert = false;
      }

      print_results = 0;
    }
  }
}


#if !defined(EI_CLASSIFIER_SENSOR) || EI_CLASSIFIER_SENSOR != EI_CLASSIFIER_SENSOR_MICROPHONE
#error "Invalid model for current sensor."
#endif