
Spectrino: TinyML Arduino and IoT Based Touch-Free Solutions

Components and supplies

Arduino Nano 33 BLE Sense
× 1
Espressif ESP8266 ESP-01
× 1
Arducam Mini 2MP Plus
× 1
MAX7219 display
A 4-in-1, 32×8 display is recommended.
× 1
Buzzer
× 1
SG90 Micro Servo Motor
This is a generic servo; I will be using an MG995 or MG959.
× 2
Adafruit Micro-Lipo Charger
× 1
Adafruit Lithium Ion Polymer Battery
× 1
Arduino MKR WiFi 1010
× 1
RGB Diffused Common Cathode
× 1
Ultrasonic Sensor - HC-SR04 (Generic)
× 1

Necessary tools and machines

10 Pc. Jumper Wire Kit, 5 cm Long

Apps and online services

Arduino Web Editor
ThingSpeak API
Edge Impulse Studio
TensorFlow

About this project

Overview

The pandemic introduced distance as a constraint on social interaction. Given this risk factor, countries around the world went under varying levels of quarantine, and many malls had to close as their consumer numbers dropped sharply. This resulted in very high levels of layoffs among mall staff and economic hardship for the owners of similar businesses.

As of July 2, 2020, COVID-19 had caused job losses in the United States concentrated among relatively low-income workers (annual income under $40,000). Accommodation and food services, retail trade, and entertainment correspond to an estimated total of 4,000,000 jobs lost.

These are the economic problems faced by a range of industries, with food, consumer, and retail among the top six industries with the most employees laid off (corresponding to an estimated 20,000+ jobs). Counting "International", which covers all regions outside the US considered in this study, the total number of layoffs reaches roughly 103,000+ jobs.

Complications

Considering the given data, the team identified the key challenge: for the malls that are still open, there is risk uncertainty around the population density of the various stores. On top of this, although wearing masks and avoiding contact is mandatory in many places, violations still occur. This makes it harder for mall employees and business owners, who cannot afford to work remotely, to safely navigate this new-normal workspace. This raises the question: how can we ensure an adequate level of certainty that it is safe to enter a particular store in a mall at any given time?

We are all fighting the COVID-19 pandemic that is raging right now, and we find ourselves having to adapt to it with more safety measures. While life returns to normal under these measures to avoid infection, cities are also adding safety in public and crowded places. Yet there have been many situations where we have had to break the safety measures and interact with unsafe elements to help someone in need. This is where the project comes in: it addresses preventing the spread of COVID-19 through contact interactions, or touch.

Experiments have been carried out to observe how the virus spreads through contact. The results were evaluated as follows:

I therefore decided to automate the devices most commonly used in homes and in public, to ensure hands-free interaction with them.

While building this prototype, the following solutions were developed:

  • Smart intercom system using TinyML deployed on the Arduino 33 BLE Sense: a touch-free solution that uses computer vision and a TinyML model to detect a person outside the door and ring the bell without anyone touching it.
  • Temperature monitoring and alarm system using IoT: safety has become a crucial aspect amid the pandemic, so the temperature monitoring system detects a person on entry and measures their body temperature, using an IoT ThingSpeak dashboard. The temperature is displayed on the IoT dashboard for timely trends and data analysis. If an abnormal temperature is detected, an alarm goes off so that a secondary check can be carried out.
  • Touch-free elevator system using a speech recognition TinyML model on the Arduino 33 BLE Sense: we take the elevator up and down a building several times a day, and there is always the fear of touching contaminated switches touched by other commuters. This speech recognition model identifies when a person wants to go up or down and performs the corresponding action.
  • Mask detection system based on TinyML and an IoT monitoring system: this method uses a computer vision model deployed on the Arduino 33 BLE Sense to detect whether a person is wearing a mask, and likewise sends this data to an IoT dashboard to monitor unsafe times and impose restrictions.
  • Smart queue monitoring and establishing system for supermarkets and malls using TinyML, IoT, and computer vision: this model detects people standing outside the supermarket and allows 50 people in at a time. It then waits a further 15 minutes for the people inside to finish shopping before the next 50 people are let in. This is done using computer vision and TinyML deployed on the Arduino 33 BLE Sense, and the data is projected onto an IoT dashboard where it can be tracked in real time.
  • Person monitoring system for mall aisles and a contamination-based sanitation system: this solution uses a person detection algorithm deployed across areas of a mall or supermarket, and when the human contamination of an area crosses a threshold, it self-sanitizes that area with ultraviolet light. The sanitation durations and times are projected onto an IoT dashboard for the supermarket staff to analyze.
  • Setup (project requirements):

    Required hardware:

    1) Arduino Nano 33 BLE Sense

    The Arduino Nano 33 BLE Sense is an evolution of the traditional Arduino Nano, but featuring a much more powerful processor, the nRF52840 from Nordic Semiconductors, a 32-bit ARM® Cortex™-M4 CPU running at 64 MHz. This allows you to make larger programs than with the Arduino Uno (it has 1 MB of program memory, 32 times bigger) with far more variables (the RAM is 128 times bigger). The main processor also includes remarkable features like Bluetooth® pairing via NFC and ultra-low-power consumption modes.

    Embedded artificial intelligence

    Besides an impressive selection of sensors, the main feature of this board is the possibility of running edge computing (AI) applications on it using TinyML. You can create your machine learning models using TensorFlow™ Lite and upload them to the board using the Arduino IDE.

    2) ESP8266 ESP-01

    The ESP8266 ESP-01 is a Wi-Fi module that allows microcontrollers to access a Wi-Fi network. This module is a self-contained SOC (System On Chip) that doesn't necessarily need a microcontroller to manipulate inputs and outputs as you would normally do with an Arduino, because the ESP-01 acts as a small computer. Depending on the version of the ESP8266, it can have up to 9 GPIOs (General Purpose Input/Output). So we can either give a microcontroller internet access like the Wi-Fi shield does for the Arduino, or we can simply program the ESP8266 to not only access a Wi-Fi network, but to act as a microcontroller as well. This makes the ESP8266 very versatile.

    3) Arducam Mini 2MP Plus

    The ArduCAM-2MP-Plus is an optimized version of the ArduCAM shield Rev.C, a high-definition 2MP SPI camera that reduces the complexity of the camera control interface. It integrates a 2MP CMOS image sensor, the OV2640, and provides a miniature size along with an easy-to-use hardware interface and an open-source code library.

    The ArduCAM mini can be used on any platform, such as Arduino, Raspberry Pi, Maple, ChipKit, or BeagleBone Black, as long as the platform has SPI and I2C interfaces and can be mated well with a standard Arduino board. The ArduCAM mini not only offers the capability to add a camera interface that is missing on some low-cost microcontrollers, but also the capability to add multiple cameras to a single microcontroller.

    4) Arduino MKR WiFi 1010:

    The Arduino MKR WiFi 1010 is the easiest point of entry to basic IoT and pico-network application design. Whether you are building a sensor network connected to your office or home router, or creating a BLE device that sends data to a cellphone, the MKR WiFi 1010 is a one-stop solution for many basic IoT application scenarios.

    The board's main processor is a low-power Arm® Cortex®-M0 32-bit SAMD21, like in the other boards of the Arduino MKR family. WiFi and Bluetooth® connectivity are handled by the NINA-W10, a module from u-blox and a low-power chipset operating in the 2.4 GHz range. Secure communication is also ensured through the Microchip® ECC508 crypto chip. On top of that, the board includes a battery charger and a directable RGB LED.

    Software tools:

    1) Arduino Web Editor

    Arduino Create is an integrated online platform that enables makers and professional developers to write code, access content, configure boards, and share projects. Go from an idea to a finished IoT project faster than ever before. With Arduino Create you can use the online IDE, connect multiple devices to the Arduino IoT Cloud, browse a collection of projects on the Arduino Project Hub, and connect remotely to your boards with the Arduino Device Manager. You can also share your own creations, together with step-by-step guides, schematics, and references, and receive feedback from others.

    2) Edge Impulse Studio:

    The trend of running ML on microcontrollers is sometimes called Embedded ML or TinyML. TinyML has the potential to create small devices that can make smart decisions without needing to send data to the cloud, which is great for efficiency and privacy. Even powerful deep learning models (based on artificial neural networks) are now reaching microcontrollers. Over the past year, great strides have been made in making deep learning models smaller, faster, and runnable on embedded hardware through projects like TensorFlow Lite for Microcontrollers, uTensor, and Arm's CMSIS-NN. However, building a quality dataset, extracting the right features, and training and deploying these models can still be complicated.

    With Edge Impulse you can now quickly collect real-world sensor data, train an ML model on this data in the cloud, and then deploy the model back to your Arduino device. From there you can integrate the model into your Arduino sketches with a single function call. Your sensors then become much smarter, able to make sense of complex events in the real world. The built-in examples allow you to collect data from the accelerometer and the microphone, but it's easy to integrate other sensors with a few lines of code.
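    As an illustration of that single function call, the following is a minimal, hypothetical sketch of how a model exported from Edge Impulse as an Arduino library is typically invoked. The header name spectrino_inferencing.h and the way the feature buffer is filled are placeholders, not the exact code used in this project; the run_classifier() call and the EI_CLASSIFIER_* constants come from the Edge Impulse inferencing SDK.

    // Hypothetical example of calling an Edge Impulse model from a sketch.
    #include <spectrino_inferencing.h>   // placeholder name of the exported library

    static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];  // raw sensor samples go here

    // Callback the classifier uses to read slices of the raw data
    static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
      memcpy(out_ptr, features + offset, length * sizeof(float));
      return 0;
    }

    void setup() {
      Serial.begin(115200);
    }

    void loop() {
      // ... fill `features` with microphone or accelerometer samples here ...

      signal_t signal;
      signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
      signal.get_data = &get_feature_data;

      ei_impulse_result_t result;
      if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
        // print the confidence of each trained label (e.g. "up", "down")
        for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
          Serial.print(result.classification[i].label);
          Serial.print(": ");
          Serial.println(result.classification[i].value);
        }
      }
      delay(1000);
    }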

    3) ThingSpeak:

    ThingSpeak™ is an IoT analytics service that allows you to aggregate, visualize, and analyze live data streams in the cloud. ThingSpeak provides instant visualizations of data posted by your devices. With the ability to execute MATLAB® code in ThingSpeak, you can perform online analysis and process data as it comes in. ThingSpeak is often used for prototyping and proof-of-concept IoT systems that require analytics.

    You can send data directly to ThingSpeak from any internet-connected device using the REST API or MQTT. In addition, cloud-to-cloud integrations with The Things Network, Senet, the Libelium Meshlium gateway, and Particle.io allow sensor data to reach ThingSpeak over LoRaWAN® and 4G/3G cellular connections.

    Getting started with the implementation:

    First project: touch-free elevator system using a speech recognition TinyML model on the Arduino 33 BLE Sense:

    We use elevators several times a day as we commute! The commonly used switches are contaminated by everyone who has already touched the lift before us. So I decided to create a touch-free solution for elevators that uses voice commands and carries out the task without relying on IoT or a Wi-Fi network. Touch-free elevator solutions that use gesture control or ultrasonic sensors to perform the task have been implemented, but the problem these sensors face is that they have to be activated from a closer range, which increases the risk of touch. These sensors are also very sensitive and get activated even if a small object gets in the way. I therefore proposed a more accurate solution using voice detection on the Arduino 33 BLE Sense. The model uses two commands, "up" and "down", and accordingly sends data to a servo to press the corresponding button on the switch panel. The idea of sending data to a servo to activate the switch was chosen because the majority of buildings already have pre-built switch and display systems, so an external hardware system is added to control their functions without disturbing those existing systems.

    The core logic used to carry out this function is as follows:

    Training the model in Edge Impulse Studio to interpret the "up" and "down" commands

    a) Accumulating raw data as training and testing datasets. Here I accumulated roughly a minute and a half of data, with each "up" and "down" sample lasting 2 seconds.

    b) Creating an impulse based on the required parameters:

    Here I set the window increase size to 300 ms and trained it on a Keras neural network dedicated to microphone and accelerometer data.

    c) Converting the raw data into processed data. Right below the raw data you can see the raw features, and the processed data appears as DSP results based on cepstral coefficients.

    d) Training the input data to generate the processed features. To train the impulse better and make the speech recognition accurate for all types of voices, I recorded low-frequency and high-frequency data based on the same samples. Here we get the feature output on an "S" curve, using central derivatives for the x, y, and z axes.

    e) Finally, designing the neural network architecture in the Edge Impulse Neural Network Classifier and training the network.

    The neural network architecture designed for the data input is as follows:

  • Input layer
  • Reshape layer
  • 1D conv/pool layer (30 neurons, kernel size 5)
  • 1D conv/pool layer (10 neurons, kernel size 5)
  • Flatten layer
  • The model performed quite well, with an average accuracy of 91.2% and a loss of 0.29. It was trained over 100 training cycles (epochs). The confusion matrix looks very clean and accurate, since most of the held-out data matches its respective labeled class.

    After training the model, I tested it with test data and live data; based on 24 seconds of test data, the model's accuracy was 75%.

    Finally, after training the model and getting adequate accuracy with the logic, I deployed the model as an Arduino library and flashed it onto the Arduino 33 BLE Sense.

    Once the script was ready, I started editing it in the Arduino Web Editor for ease of customization; the final script that can be deployed on the Arduino 33 BLE Sense is shown below.

    Here you can see the main.ino file on the Arduino Web Editor platform.

    Main-script.ino - Arduino Web Editor

    Main-script.ino - Github

    Deploying to the Arduino 33 BLE Sense

    At the time of writing, the only Arduino board with a built-in microphone is the Arduino Nano 33 BLE Sense, so that is what we'll use in this section. If you are using a different Arduino board and attaching your own microphone, you will need to implement that part yourself.

    The Arduino Nano 33 BLE Sense also has a built-in LED, which we use to indicate that a word has been recognized, and we control the servo to perform the function.

    The following is the micro-features code snippet of the model.

    The logic for responding to the commands works as follows:

    // If the "up" command is heard, switch on the corresponding LED and drive the servo
    if (found_command[0] == 'u') {   // "up": only the first character of the recognized command is compared
      last_command_time = current_time;
      digitalWrite(LEDG, LOW);  // green LED on (LOW = on) for the "up" command
      servo_7.write(0);         // rotate the servo to 0 degrees to press the lift's "up" button
      delay(100);
      digitalWrite(LEDG, HIGH); // LED off after flashing for the command
      servo_7.write(180);       // the servo returns to its original position, i.e. 180 degrees
    }
    // If the "down" command is heard, switch on the corresponding LED and drive the appropriate servo
    if (found_command[0] == 'd') {   // "down"
      last_command_time = current_time;
      digitalWrite(LEDG, LOW);  // green LED on (LOW = on) for the "down" command
      servo_7.write(0);         // rotate the servo to 0 degrees to press the lift's "down" button
      delay(100);
      digitalWrite(LEDG, HIGH); // LED off after flashing for the command
      servo_7.write(180);       // the servo returns to its original position, i.e. 180 degrees
    }

    The second command responder works the same as the one above, but turns the servo when the "down" command is heard.
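    The responder snippet above assumes that servo_7 and the on-board LED have already been initialized elsewhere in the sketch. As a reference only, a minimal setup could look like the following; the servo signal pin (7) is a placeholder, and the standard Arduino Servo library is assumed.

    #include <Servo.h>

    Servo servo_7;                 // servo that physically presses the lift button

    void setup() {
      pinMode(LEDG, OUTPUT);       // on-board green LED of the Nano 33 BLE Sense
      digitalWrite(LEDG, HIGH);    // LEDs on this board are active low, so HIGH = off
      servo_7.attach(7);           // placeholder pin; use whichever pin the servo signal wire is on
      servo_7.write(180);          // rest position, away from the button
    }

    void loop() {
      // the command responder shown above drives servo_7 when "up" or "down" is recognized
    }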

    After training the data, we get the trained tflite dataset output as follows.

    Here we are using the microphone built into the Arduino Nano 33 BLE Sense, and the model occupies on average ~24-28 KB of the board's memory (the Nano 33 BLE Sense provides 1 MB of flash and 256 KB of RAM). This model is relatively lightweight compared to image recognition models and can process information at a much faster rate.

    The logic of the speech recognition model is as follows:

  • Data is captured
  • Audio samples are captured from the microphone
  • The raw audio data is converted into a spectrogram
  • The TFLite interpreter runs the model
  • The inference output is used to decide whether a command was heard
  • If the "down" command is heard, the servo moves to press the down key on the lift's control panel
  • If the "up" command is heard, the servo moves to press the up key on the lift's control panel
  • The above is the data present in the trained model's library; the main functions are contained in the TensorFlow files.

    Circuit diagram of the project:

    The Arduino Nano 33 BLE Sense uses its built-in microphone for raw data collection. Since inference takes time on the Arduino, I added a delay of 100 milliseconds so the data is processed accurately. The built-in blue LED therefore blinks between two recording samples, and the response has to be spoken between two LED blinks.

    This is a simulation on the Arduino 33 BLE Sense showing the LED blink between two successful microphone input intervals.


    Depending on the command input, the servo rotates accordingly.

    Now, there are other alternatives for a touch-free elevator automation system, such as using an ultrasonic sensor or a gesture sensor, but both have their own flaws, so I decided to create a voice-controlled elevator automation system.

    Flaws of ultrasonic sensors: ultrasonic sensors are very sensitive to movement. If any moving object in the walkway comes within range of the ultrasonic sensor, it gets activated. Ultrasonic sensors are also not very accurate and sometimes process wrong information.

    Flaws of gesture control sensors: these sensors are more accurate than ultrasonic sensors, but they need to be activated from a closer range. This raises the risk of the hand touching the lift panel.

    A voice-controlled elevator panel, however, is more accurate than both of the above solutions and can be activated from a greater distance than ultrasonic and gesture sensors.

    The following is a graph of accuracy versus activation distance and where these sensors stand on it.

    It shows the distance required to activate a gesture sensor; since this distance is quite small, the contamination rate is high.

    Next, we move on to the second sub-project of the main project.

    Second project: smart intercom system using facial recognition and TinyML:

    The CDC updated its site and issued a press release stating that indirect contact from surfaces contaminated with the novel coronavirus (known as fomite transmission) is a potential way of contracting the virus.

    Studies have shown that the novel coronavirus can last on plastic and metal surfaces for up to 3 days, and on cardboard for 24 hours. However, a lot has to happen before a person contracts COVID-19 from touching a contaminated surface.

    First, a person must actually come into contact with a large enough amount of the virus to cause an infection. For example, to become infected with an influenza virus, millions of copies of the virus have to reach a person's face from a surface, whereas only a few thousand copies are needed if the virus enters the lungs directly, the New York Times reports.

    If a person touches a surface with a lot of virus traces on it, they still have to pick up enough of the virus and then touch their eyes, nose, or mouth. This is why public health experts say it is very important not to touch surfaces too often and to avoid touching contaminated or frequently touched objects.

    The COVID-19 virus is spreading across the world. Even once it finally subsides, people will have developed a sensitivity to touching objects in public places. Since most intercoms are designed so that a call can only be placed by pressing a button, I decided that a contactless intercom solution was needed so that a person does not have to touch anything.

    The above image shows what the facial-recognition-based intercom system looks like when implemented.

    To solve the problem of touch-based systems, the places of contact and touch need to be reduced. Traditional intercom systems are built around a switch: pressing it rings the bell. To improve on this touch-based system, which increases the risk of surface contamination, I decided to build a touch-free face recognition system deployed on the Arduino 33 BLE Sense, based on TinyML and TensorFlow Lite.

    In this smart intercom system, I used a person detection algorithm deployed on the Arduino 33 BLE Sense that identifies a person and accordingly rings the bell, with an LED matrix display that reads "person".

    Towards the implementation of the smart intercom system:

    The following software was used in designing this model:

  • TensorFlow Lite
  • Arduino Web Editor
  • In this person detection model, I used a pre-trained TensorFlow person detection model apt for the project. This pre-trained model consists of three classes, of which the third class has an undefined set of data:

    "unused",

    "person",

    "notperson"

    In our model we have the Arducam Mini 2MP Plus to carry out image intake, and this image data, at a decent frame rate, is sent to the Arduino Nano 33 BLE Sense for processing and classification. Since the microcontroller provides 256 KB of RAM, we resize each image to a standard 96*96 for processing and classification. The Arduino TensorFlow Lite network consists of a deep learning framework with:

  • Depthwise Conv_2D
  • Conv_2D
  • AVERAGE Pool_2D
  • Flatten layer
  • This deep learning framework is used to train the Person detection model.

    The following is the most important function defined while processing outputs on the microcontroller, via arduino_detection_responder.cpp:

    // Process the inference results.
    uint8_t person_score =output->data.uint8[kPersonIndex];
    uint8_t no_person_score =output->data.uint8[kNotAPersonIndex];
    RespondToDetection(error_reporter, person_score, no_person_score);

    In this function definition, person_score and no_person_score are derived from the classification scores of the data.

    The Logic works in the Following way:

    ├── Autonomous Intercom System
    ├── Arducam Mini 2mp plus
    │ ├── Visual data sent to Arduino
    ├── Arduino 33 BLE Sense
    │ ├── if person_score > no_person_score
    │ │ ├── Activate the buzzer
    │ │ ├── Display "Person" on the LED Matrix
    │ │ └── ...Repeat the loop

    Adhering to the above logic, the Arducam Mini 2MP Plus continuously takes in visual data and sends it to the Arduino 33 BLE Sense to process and classify the data collected. Once the raw data is converted into processed data, it is classified according to the trained data. If a person is detected, the Arduino sends a signal to activate the buzzer and to make the MAX7219 display "person". This is how the logic of the system works.

    Functioning and Working of Logic in Code:

    The following are the libraries included in the main.ino code for the functioning of the model.

    #include <TensorFlowLite.h>

    #include "main_functions.h"

    #include "detection_responder.h"
    #include "image_provider.h"
    #include "model_settings.h"
    #include "person_detect_model_data.h"
    #include "tensorflow/lite/micro/kernels/micro_ops.h"
    #include "tensorflow/lite/micro/micro_error_reporter.h"
    #include "tensorflow/lite/micro/micro_interpreter.h"
    #include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
    #include "tensorflow/lite/schema/schema_generated.h"
    #include "tensorflow/lite/version.h"

    In the following code snippet, the loop is defined and performed. Since this is the main.ino code, it controls the core functioning of the model - used to run the libraries in the model.

    void loop() {
    // Get image from provider.
    if (kTfLiteOk !=GetImage(error_reporter, kNumCols, kNumRows, kNumChannels,
    input->data.uint8)) {
    TF_LITE_REPORT_ERROR(error_reporter, "Image capture failed.");
    }

    // Run the model on this input and make sure it succeeds.
    if (kTfLiteOk !=interpreter->Invoke()) {
    TF_LITE_REPORT_ERROR(error_reporter, "Invoke failed.");
    }

    TfLiteTensor* output =interpreter->output(0);

    // Process the inference results.
    uint8_t person_score =output->data.uint8[kPersonIndex];
    uint8_t no_person_score =output->data.uint8[kNotAPersonIndex];
    RespondToDetection(error_reporter, person_score, no_person_score);
    }

    The following code snippet shows the libraries needed to capture and run inference on the image. After capture, the images are converted to a standardized 96*96 size that can be interpreted on the Arduino board.

    Here, the Arducam mini 2mp OV2640 library has been utilised.

    This code has been provided in the arduino_image_provider.cpp snippet

    #if defined(ARDUINO) &&!defined(ARDUINO_ARDUINO_NANO33BLE)
    #define ARDUINO_EXCLUDE_CODE
    #endif // defined(ARDUINO) &&!defined(ARDUINO_ARDUINO_NANO33BLE)

    #ifndef ARDUINO_EXCLUDE_CODE

    // Required by Arducam library
    #include <Arduino.h>
    #include <SPI.h>
    #include <Wire.h>
    // Arducam library
    #include <ArduCAM.h>
    // JPEGDecoder library
    #include <JPEGDecoder.h>

    // Checks that the Arducam library has been correctly configured
    #if !(defined OV2640_MINI_2MP_PLUS)
    #error Please select the hardware platform and camera module in the Arduino/libraries/ArduCAM/memorysaver.h
    #endif

    // The size of our temporary buffer for holding
    // JPEG data received from the Arducam module
    #define MAX_JPEG_BYTES 4096
    // The pin connected to the Arducam Chip Select
    #define CS 7

    // Camera library instance
    ArduCAM myCAM(OV2640, CS);
    // Temporary buffer for holding JPEG data from camera
    uint8_t jpeg_buffer[MAX_JPEG_BYTES] ={0};
    // Length of the JPEG data currently in the buffer
    uint32_t jpeg_length =0;

    // Get the camera module ready
    TfLiteStatus InitCamera(tflite::ErrorReporter* error_reporter) {
    TF_LITE_REPORT_ERROR(error_reporter, "Attempting to start Arducam");
    // Enable the Wire library
    Wire.begin();
    // Configure the CS pin
    pinMode(CS, OUTPUT);
    digitalWrite(CS, HIGH);
    // initialize SPI
    SPI.begin();
    // Reset the CPLD
    myCAM.write_reg(0x07, 0x80);
    delay(100);
    myCAM.write_reg(0x07, 0x00);
    delay(100);
    // Test whether we can communicate with Arducam via SPI
    myCAM.write_reg(ARDUCHIP_TEST1, 0x55);
    uint8_t test;
    test =myCAM.read_reg(ARDUCHIP_TEST1);
    if (test !=0x55) {
    TF_LITE_REPORT_ERROR(error_reporter, "Can't communicate with Arducam");
    delay(1000);
    return kTfLiteError;
    }

    The following code is the arduino_detection_responder.cpp code, which controls the main output of the model. Here we take into consideration the classification score defined in the main.ino code, and I provide outputs according to the confidence of the person score.

    // Switch on the green LED when a person is detected,
    // the red when no person is detected
    if (person_score > no_person_score) {
      digitalWrite(LEDG, LOW);   // green LED on (active low): a person is detected at the door
      digitalWrite(LEDR, HIGH);  // red LED off
      digitalWrite(5, LOW);      // drive pin 5 so the buzzer rings
      myDisplay.setTextAlignment(PA_CENTER);
      myDisplay.print("Person"); // the LED matrix in the house displays "Person"
      delay(100);
    } else {
      digitalWrite(LEDG, HIGH);
      digitalWrite(LEDR, LOW);
    }

    TF_LITE_REPORT_ERROR(error_reporter, "Person score:%d No person score:%d",
    person_score, no_person_score);
    }

    #endif // ARDUINO_EXCLUDE_CODE
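    The responder above calls myDisplay.print() and PA_CENTER without showing how the MAX7219 matrix is initialized. Assuming the MD_Parola/MD_MAX72XX libraries (which provide that API), a minimal setup sketch could look like the following; the hardware type and chip-select pin are placeholders for this particular 4-in-1 module, not values confirmed by the project.

    #include <MD_Parola.h>
    #include <MD_MAX72xx.h>
    #include <SPI.h>

    // Placeholder wiring/hardware assumptions for a 4-in-1 FC16-style MAX7219 module
    #define HARDWARE_TYPE MD_MAX72XX::FC16_HW
    #define MAX_DEVICES   4
    #define CS_PIN        10

    MD_Parola myDisplay = MD_Parola(HARDWARE_TYPE, CS_PIN, MAX_DEVICES);

    void setupDisplay() {
      myDisplay.begin();            // initialize the matrix over hardware SPI
      myDisplay.setIntensity(2);    // brightness 0-15
      myDisplay.displayClear();
      myDisplay.setTextAlignment(PA_CENTER);
    }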

    Working of the Firmware:

    This is the complete setup of the firmware designed on Fritzing.

    This simulation shows the capture of data by the Arducam and similarly classification of this data by Arduino 33 BLE Sense

    This model comprises the following firmware:

  • Arduino 33 BLE Sense - processes the gathered data, classifies it, and sends commands according to the logic fed to it.
  • Buzzer - For Alerting when a person is at the door.
  • Arducam Mini 2mp plus - Continuous Raw data image accumulation from source.
  • Adafruit lithium ion charger - Used to deliver charge through the lithium battery
  • Lithium ion Battery - power source
  • MAX7219 4 in 1 display - Used for displaying "person" on the display screen.
  • Additional Features: Using the existing intercom system, it is possible to add a servo to push the button to view the person who is standing at the door as shown in the image:

    This can be an additional setup in intercom system to switch on video when a person is detected. However, this additional system has to be deployed on the Existing Intercom system.

    Additional code added to the existing code:

    // Switch on the green LED when a person is detected,
    // the red when no person is detected
    if (person_score > no_person_score) {
      digitalWrite(LEDG, LOW);   // green LED on (active low): a person is detected at the door
      digitalWrite(LEDR, HIGH);  // red LED off
      digitalWrite(5, LOW);      // drive pin 5 so the buzzer rings
      myDisplay.setTextAlignment(PA_CENTER);
      myDisplay.print("Person"); // the LED matrix in the house displays "Person"
      servo_8.write(0);          // this switches on the intercom video by rotating the servo
      delay(500);
      servo_8.write(180);        // this switches the intercom video off again
      delay(100);
    } else {
      digitalWrite(LEDG, HIGH);
      digitalWrite(LEDR, LOW);
    }
    I have added an additional function which rotates the servo to switch on the existing intercom and then switches back to its original place.

    3rd Project:Autonomous IoT based person temperature sensing automation:

    The temperature sensor for Arduino is a fundamental element when we want to measure the temperature of a process or of the human body.

    The temperature sensor with Arduino must be in contact or close to receive and measure the heat level. That's how thermometers work.

    These devices are widely used to measure the body temperature of sick people, since temperature is one of the first factors to change in the human body when there is an abnormality or disease.

    One of the diseases that alter the temperature of the human body is COVID 19. Therefore, we present the main symptoms:

  • Cough
  • Tiredness
  • Difficulty breathing (Severe cases)
  • Fever
  • Fever is a symptom whose main characteristic is an increase in body temperature. In this disease, we need to constantly monitor these symptoms.

    The retail market has been hit hard by the pandemic. Now that malls and supermarkets have re-opened, it is necessary to ensure the safety of all the customers entering the premises. For this purpose, manual temperature checking has been set up. This increases labour and also creates a risk of contact between the person checking the temperature and the person whose temperature is being checked. This is demonstrated as follows:

    This image shows the close contact or less social distance maintained between the two persons.

    There is a second flaw which is faced in manual temperature checking system:

    For the temperatures which have been checked, the data of these recordings is not stored or not synchronised with an external device for monitoring the temperatures measured.

    Taking all these cons into consideration, I've come up with an Arduino based IoT solution deployed on the Arduino MKR WiFi 1010. The temperatures are measured using the Adafruit AMG8833 temperature module. Whenever a person is detected at the gate, the ultrasonic sensor sends the information to the Arduino MKR WiFi 1010, which commands the AMG8833 module to take a temperature reading. The module captures the data accurately, and the data is projected on an IoT dashboard in real time. If an abnormal temperature reading is detected for a person, an alarm is set off so that mall security and staff can immediately investigate the matter. Each reading is given a timestamp, and the temperature-vs-time graph for the data can be viewed on the ThingSpeak dashboard.

    Similarly, it can be also traced on which days and which time range, does the supermarket or the mall show abnormal temperature readings and more security measures can be implemented accordingly.

    The image below shows where the setup can be embedded (the area where the setup can be installed).

    At the entry checking gate, HC-SR04 ultrasonic sensors can be installed; they detect the entry of a person and send the command to the Arduino MKR WiFi 1010 if the person is detected within a 20 cm range. The Arduino microcontroller passes the same command on to the AMG8833 temperature module to read the person's temperature. This complete process, from the ultrasonic sensor detecting the person to the temperature module taking the reading, takes some time. Hence, to stay in sync with that delay, the temperature module is mounted a bit farther away from the ultrasonic sensor.

    Whenever a person walks in, the gate is opened (here, in the prototype, we are using a servo to act as the gate, but in a further implementation of the project the servo will be replaced by a heavy-duty gate motor controlled via a motor driver, with the Arduino sending the commands). The person's temperature reading is taken and sent to the ThingSpeak IoT dashboard in real time via the Arduino MKR WiFi 1010. This requires an active Wi-Fi connection on the mall premises, which is usually available. According to research, the body temperature of a person with fever is approximately 38.1 °C (about 100.6 °F). Hence, if the temperature module detects a temperature above this threshold, the servo turns 180 degrees and the buzzer goes off to alert people about the person. Security and other mall staff can then reach the area in time to control the situation.

    The Logic works in the Following way:

    ├── PersonTemperature Detection
    ├── HC-Sr04 Ultrasonic sensor
    │ ├── Person Detection
    │ │ ├── If (person distance <= 20 cm), send command
    ├── Arduino MKR WiFi 1010
    │ ├── if the ultrasonic sensor sends a command;
    │ │ ├── Open the gate - Servo(0)
    │ │ ├── Activate the AMG8833 to take readings
    │ │ ├── Send the Readings to ThingSpeak Dashboard
    │ │ | ├── If Temperature> 38.1C
    │ │ │ | ├── Close the gate - servo(180)
    │ │ │ │ ├── Activate the Buzzer
    │ │ └── ...Repeat the loop

    In this way, the complete Logic of the Temperature Monitor Functions.

    The circuit Diagram for the Temperature Model is given as follows:

    This is the complete Firmware and setup used in the project

    This simulation shows the data type captured by the AMG 8833 Thermal Cam and this data is sent to the Arduino MKR WiFi 1010 to transfer commands

    This simulation shows the distance captured by ultrasonic Sensor and this command is sent to the Arduino MKR WiFi to activate the AMG 8833 thermal sensor

    Instead of a servo based door opening system, I will be implementing a command system using motor driver to automate door sliding opening system as follows:

    Setting up the IoT Dashboard:

    Using the ThingSpeak IoT Dashboard Setup:

    ThingSpeak™ is an IoT analytics service that allows you to aggregate, visualize, and analyze live data streams in the cloud. ThingSpeak provides instant visualizations of data posted by your devices to ThingSpeak. With the ability to execute MATLAB® code in ThingSpeak, you can perform online analysis and process data as it comes in. ThingSpeak is often used for prototyping and proof-of-concept IoT systems that require analytics.

    Since ThingSpeak was easy to set up and use, I preferred to go with that dashboard. This interface allows the user to share the dashboard with the security or staff department of the mall so that they can continuously monitor the people visiting, and likewise impose restrictions at times when they feel the temperature risk is higher.

    The above image represents the versatility of the ThingSpeak dashboard and the creative visualisations portrayed.

    Logic used in the ThingSpeak IoT data collection process

    This image shows the creation of visualisations in different channels. Here, I've created a Temperature vs Time Graph in the visualisation section. The data collected by the AMG8833 sensor will be each allocated a timestamp and will be plotted on the graph to see the time for which each data is captured.

    The data collected can be viewed in real time on the Public Dashboard here; ThingSpeak Temperature Dashboard

    Similarly, this plot can be integrated cumulatively to a single Dashboard made open to the visitors of the mall to view the temperature data before entering the mall. If the visitors find an abnormal temperature reading at a particular day, they can prefer not to go to the supermarket or mall at that day to be safe,

    The visualisation Integration data chart:

     

    This chart can be embedded into a single dashboard together with the chart readings of other malls and supermarkets in the area; a visitor can then check which mall is performing better in terms of safety and prefer the mall where the safety guidelines are followed and the people entering are not diagnosed with fever. This can help the Government establish safety and trust among people along with the re-opening of malls and the retail sector.

    The code of the following project can be either viewed in Github or in the Arduino Web editor here; Temperature-Model-code

    The logic in the code goes as follows:

    #include <WiFi.h>
    #include <WiFiMulti.h>
    #include <Wire.h>
    #include <ESP32_Servo.h>
    #include <Adafruit_Sensor.h>
    #include <Adafruit_AMG88xx.h>
    Servo servo1;
    int trigPin =9;
    int echoPin =8;
    long distance;
    long duration;

    WiFiMulti WiFiMulti;

    const char* ssid ="JioFiber-tcz5n"; // Your SSID (Name of your WiFi) - This is a dummy name, enter your wifi ssid here
    const char* password ="**********"; // I have not mentioned the password here; while running the script, enter your own password

    const char* host ="api.thingspeak.com"; // ThingSpeak API host (channel 1121969)
    String api_key ="8LNG46XKJEJC89FE"; // Your API Key provied by thingspeak


    Adafruit_AMG88xx amg;

    Here, I have defined some of the Libraries and Firmware along with the required setup for the WiFi host and the API-Key for the temperature dashboard on thingspeak

    Here's an image of the Code in progress:

    The next part of the code is Setting up the required components before the actual loop begins:

    These are the prerequisites before the actual code loops. I have defined the servo pin and the ultrasonic sensor pins, and also started a test of the AMG8833 to see whether it can read data and whether it is connected.

    void setup()
    {
    Serial.begin(9600);
    servo1.attach(7);
    pinMode(trigPin, OUTPUT);
    pinMode(echoPin, INPUT); // put your setup code here, to run once:

    Connect_to_Wifi();
    Serial.println(F("AMG88xx test"));

    bool status;

    // default settings
    status =amg.begin();
    if (!status) {
    Serial.println("Could not find a valid AMG88xx sensor, check wiring!");
    while (1);
    }

    Serial.println("-- Thermistor Test --");

    Serial.println();

    delay(100); // let sensor boot up
    }
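    The setup above calls Connect_to_Wifi(), which is not shown in the snippet. A minimal sketch of such a helper, assuming the WiFiMulti object, ssid, and password declared earlier in the sketch, might look like this:

    // Minimal sketch of the Wi-Fi helper called in setup()
    void Connect_to_Wifi() {
      WiFiMulti.addAP(ssid, password);        // register the access point credentials
      Serial.print("Connecting to WiFi");
      while (WiFiMulti.run() != WL_CONNECTED) {
        Serial.print(".");
        delay(500);                           // retry until the board is connected
      }
      Serial.println();
      Serial.print("Connected, IP address: ");
      Serial.println(WiFi.localIP());
    }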

    The next part is the void loop() where the complete function is carried out. Here, I initially set the gate to open to allow the entry of each person. The AMG8833 performs data collection and reads the temperature of the people coming in. If the temperature is higher than the expected threshold, the gate is closed and an alarm (buzzer) is set off to alert people and refuse the person entry.

    void loop() {
    ultra();
    servo1.write(0);
    if(distance <=20){
    Serial.print("Thermistor Temperature =");
    Serial.print(amg.readThermistor());
    Serial.println(" *C");

    Serial.println();
    // call function to send data to Thingspeak
    Send_Data();
    //delay
    delay(50);

    if(amg.readThermistor()> 38.1){ // if a person with fever is detected, he is not allowed to enter
    // a person with fever has an avg body temperature of 38.1 degrees Celsius
    servo1.write(180);
    digitalWrite(6, HIGH); //Turns on the buzzer to alarm people
    }
    }
    }
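    The loop calls ultra() to update the global distance variable, but that helper is not shown in the snippet. A minimal sketch of an HC-SR04 reading, using the trigPin/echoPin and duration/distance globals declared earlier, could be:

    // Minimal sketch of the ultrasonic helper used in loop(); reads the HC-SR04
    // and stores the result (in cm) in the global `distance` variable.
    void ultra() {
      digitalWrite(trigPin, LOW);
      delayMicroseconds(2);
      digitalWrite(trigPin, HIGH);        // 10 us trigger pulse
      delayMicroseconds(10);
      digitalWrite(trigPin, LOW);
      duration = pulseIn(echoPin, HIGH);  // echo pulse width in microseconds
      distance = duration * 0.034 / 2;    // convert to centimetres (speed of sound ~0.034 cm/us)
    }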

    The last part is sending the data to the ThingSpeak Dashboard:

    Here, since I have only one field in the channel, all the data will be sent to that field. The captured, amg.readThermistor() data is sent to the dashboard.

    void Send_Data()
    {

    Serial.println("Prepare to send data");

    // Use WiFiClient class to create TCP connections
    WiFiClient client;

    const int httpPort =80;

    if (!client.connect(host, httpPort)) {
    Serial.println("connection failed");
    return;
    }
    else
    {
    String data_to_send =api_key;
    data_to_send +="&field1=";
    data_to_send +=String(amg.readThermistor());
    data_to_send +="\r\n\r\n";

    client.print("POST /update HTTP/1.1\n");
    client.print("Host:api.thingspeak.com\n");
    client.print("Connection:close\n");
    client.print("X-THINGSPEAKAPIKEY:" + api_key + "\n");
    client.print("Content-Type:application/x-www-form-urlencoded\n");
    client.print("Content-Length:");
    client.print(data_to_send.length());
    client.print("\n\n");
    client.print(data_to_send);

    delay(200); // reduced delay to perform real time data collection
    }

    client.stop();

    }

    This ends the code section of the project and we move on to explanation of the use and GO TO MARKET part of the project

    The above image shows the implementation of the methodology and model in supermarkets and malls.

    GO TO MARKET &PRACTICALITY:

  • Malls and Supermarkets can use this to identify Abnormal Temperature data and this data can be observed even for a certain day at a certain point of time.
  • Implement Strategies using this data to ensure Safety and Compliance.
  • Decrease Labour and automate Temperature Monitoring Process
  • Offer a dashboard to the visitors to monitor whether the mall is safe, and accordingly visit the mall at the safest point of time.
  • This product can be used to ensure the visitors that the mall is a safe place and hence, can increase the sales and visits following Government guidelines
  • Companies offering IoT based solutions can invest in this product for mass production and distribution.
  • The more supermarkets use this product, the more access the government has to the data, and the more choice customers have to select the safest place in their locality.
  • Comparatively affordable solution as compared to manual temperature monitoring as it decreases the labour cost + decreases the rate of infection when compared to manual monitoring where the person taking temperature has to be close to the visitor to capture the temperature.
  • 4th Project:TinyML &IoT based queue monitoring and establishing system deployed on the Arduino 33 BLE Sense:

    Just as malls and the retail sector have re-opened, queue management in malls and supermarkets has become a big problem. Due to the pandemic, restricting the number of people allowed inside the mall at a time has to be followed by malls to ensure safety compliance. But this is done manually, which increases labour. Also, the data on the number of people inside the mall at a given point in time is not available to the mall's visitors. If this data were made available to them, it would increase the percentage of people visiting the mall.

    Trying to run a shop or a service during the ongoing Corona crisis is certainly a challenge. Serving customers while keeping them and employees safe is tricky, but digital queuing can help a lot in this regard. The technologies behind virtual queues are not entirely new; the call for social distancing just highlights some of the many benefits they offer.

    Most countries have introduced legal measures to combat the spread of COVID-19. To ensure customer satisfaction whilst adhering to the new regulations, one thing is for sure:long queues and crowded lobbies need to go. Digital queuing (also referred to as virtual or remote queuing) technology allows businesses to serve their customers in a timely manner while they stay out of harm’s way.

    Generally speaking, you will likely find one or more of the following types of queue management solutions in a given retail environment:

  • Structured queues: Lines form in a fixed, predetermined position. Examples are supermarket checkouts or airport security queues.
  • Unstructured queues: Lines form naturally and spontaneously in varying locations and directions. Examples include taxi queues and waiting for consultants in specialist retail stores.
  • Kiosk-based queues: Arriving customers enter basic information into a kiosk, allowing staff to respond accordingly. Kiosks are often used in banks, as well as medical and governmental facilities.
  • Mobile queues: Rather than queuing up physically, customers use their smartphones. They do not have to wait in the store but rather can monitor the IoT Dashboard to see wait time at the store.
  • Long queues, whether they are structured or unstructured, often deter walk-in customers from entering the store. Additionally, they limit productivity and cause excess stress levels for customers and staff.

    Does effective queue management directly affect the customer experience?

    There is an interesting aspect about the experience of waiting in line:The waiting times we perceive often do not correspond with the actual times we spent in line. We may attribute a period of time falsely to be “longer” than normal or deem another period “shorter” despite it actually exceeding the average waiting time. For the most part, this has to do with how we can bridge the time waiting.

    “Occupied time (walking to baggage claim) feels shorter than unoccupied time (standing at the carousel). Research on queuing has shown that, on average, people overestimate how long they’ve waited in a line by about 36 percent.”

    The main reason a customer is afraid to visit any supermarket is the problem of insufficient data. Visitors do not know the density of people inside the mall; the higher the number of people inside, the higher the risk of visiting. Manually counting the number of people who enter and exit the mall and updating this data in real time is not possible. A visitor also does not know which day and which time is best suited for a visit, nor the wait time of each mall, so they could go to a nearby supermarket with a shorter wait. As a result, all this leads to less conversion and fewer people visiting the mall. If visitors have access to the occupancy data, they have a sense of trust, which leads to an increase in mall sales. Hence, I've come up with an Arduino based TinyML and IoT solution to make this data available to visitors and also increase the conversion of visitors in the mall while following the necessary safety guidelines.

    This solution is based on computer vision and a person detection algorithm built on the TensorFlow framework.

    It functions in the following way :

    This solution is implemented at the gates of the mall or supermarket. The Arducam Mini 2MP keeps capturing image data and sends it to the Arduino 33 BLE Sense to process and classify. If a person is detected, the Arduino increments the stored count of the number of people inside the mall by 1. Since a person is detected, the servo motor rotates, opening the entrance to let the person in. For each person allowed inside, the data is sent to the ThingSpeak IoT dashboard, which is open for visitors to view.

    When the person count exceeds the threshold of 50 (this threshold can be altered depending upon the supermarket's size), the gate is closed and a wait time of 15 minutes is set until the customers inside exit the store. The wait time is then shown on the LED matrix display so that the customers in the queue know how long they have to wait.

    The people can also keep a track of the number of people inside the store. The number of people allowed to enter inside the store at a single time is 50 people.

    The physical queuing unit of this product helps in establishing queues, while the IoT dashboard projects the total count of customers going into the store and the total count of customers going out of it. Currently this is the dashboard data that will be displayed, but I am working on logic for displaying the waiting time required for a customer to get inside the mall. The logic is pretty simple and depends on the number of people inside the mall: the total number of people who entered (shown on the dashboard) minus the total number who exited (shown on the dashboard) gives the total number of people inside the store. The next operation subtracts this number from the threshold (the limit the store can accommodate). If the outcome is a negative integer, the waiting time is that negative integer multiplied by the negative of the average time a person spends inside the store. If the outcome is a positive integer, the wait time is zero.

    Heading towards the implementation of the physical queuing system:

    The following Softwares have been used in designing this model:

  • TensorFlow lite
  • ThingSpeak
  • Arduino Web Editor
  • In this person detection model, I have used the Pre-trained TensorFlow Person detection model apt for the project. This pre-trained model consists of three classes out of which the third class is with undefined set of data:

    "unused",

    "person",

    "notperson"

    In our model we have the Arducam Mini 2MP Plus to carry out image intake, and this image data, at a decent frame rate, is sent to the Arduino Nano 33 BLE Sense for processing and classification. Since the microcontroller provides 256 KB of RAM, we resize each image to a standard 96*96 for processing and classification. The Arduino TensorFlow Lite network consists of a deep learning framework with:

  • Depthwise Conv_2D
  • Conv_2D
  • AVERAGE Pool_2D
  • Flatten layer
  • This deep learning framework is used to train the Person detection model.

    The following is the most important function defined while processing outputs on the microcontroller, via arduino_detection_responder.cpp:

    // Process the inference results.
    uint8_t person_score =output->data.uint8[kPersonIndex];
    uint8_t no_person_score =output->data.uint8[kNotAPersonIndex];
    RespondToDetection(error_reporter, person_score, no_person_score);

    In this function definition, person_score and no_person_score are derived from the classification scores of the data.

    Using these defined scores, I provide certain outputs based on the confidence of person_score and no_person_score.

    The detection responder logic of the code works in the following way:

    ├── Person Detection and responder - Entry
    ├── Arducam mini 2MP Plus
    │ ├──Image and Video Data to Arduino
    ├── Arduino BLE 33 Sense
    │ ├── processing and classification of the input data
    │ │ ├── If person detected, open the gate - servo(180)
    │ │ ├── If no person detected, close the gate - servo(0)
    │ │ ├── Send the count of people who entered to the ThingSpeak Dashboard via the ESP8266-01
    │ │ | ├── If people count has exceeded 50
    │ │ │ | ├── Close the gate &wait for 15min to let the people inside move out
    │ │ │ │ ├── Display wait time on a LED Matrix
    │ │ └── ...Repeat the loop

    ├── Person Detection and responder - Exit
    ├── Arducam mini 2MP Plus
    │ ├──Image and Video Data to Arduino
    ├── Arduino BLE 33 Sense
    │ ├── processing and classification of the input data
    │ │ ├── If person detected, open the gate - servo(180)
    │ │ ├── If no person detected, close the gate - servo(0)
    │ │ ├── Send the count of people who exited to the ThingSpeak Dashboard via the ESP8266-01
    │ │ └── ...Repeat the loop

    Adhering to the logic used in the model, the Arducam Mini 2MP Plus continuously captures image data and sends it to the Arduino 33 BLE Sense to process and classify. The overall model size is 125 KB. If a person is detected, the Arduino commands the servo to rotate to 180 degrees; if no person is detected, the servo is rotated to 0 degrees and the gate is closed. Each time a person is detected, the count increments by 1. If the count exceeds the threshold of 50, no more people are allowed inside and a wait time of 15 minutes is set.

    The wait time is continuously displayed and updated on the LED matrix display.

    This count is also displayed on the ThingSpeak IoT dashboard via the ESP8266 01

    Through the dashboard, an individual can easily see how many people are inside on a given day at a given point of time.

    At the exit gate, the same logic is set. If a person is detected, the gate is opened while if no person is detected, the gate is closed. Each time a person is detected, the count increases by 1. This count is displayed on the ThingSpeak IoT dashboard.

    In this way one can monitor the number of people entering and the number of people exiting.
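    The count itself reaches ThingSpeak through the ESP8266-01. One possible (hypothetical) division of labour is for the Nano 33 BLE Sense to print the running count over a serial link and for the ESP-01 to forward it to the ThingSpeak REST API. A minimal sketch of the ESP-01 side is shown below; the credentials, write API key, and baud rate are placeholders, not values confirmed by the project.

    // Hypothetical ESP8266-01 sketch: forwards the people count received over the
    // serial link from the Arduino 33 BLE Sense to the ThingSpeak update endpoint.
    #include <ESP8266WiFi.h>

    const char* ssid     = "YOUR_SSID";          // placeholder credentials
    const char* password = "YOUR_PASSWORD";
    const char* host     = "api.thingspeak.com";
    const String apiKey  = "YOUR_WRITE_API_KEY"; // placeholder ThingSpeak write key

    void setup() {
      Serial.begin(9600);                        // UART link to the Arduino 33 BLE Sense
      WiFi.begin(ssid, password);
      while (WiFi.status() != WL_CONNECTED) delay(500);
    }

    void loop() {
      if (Serial.available()) {
        long count = Serial.parseInt();          // latest entry count sent by the Nano
        if (count > 0) {
          WiFiClient client;
          if (client.connect(host, 80)) {
            client.print(String("GET /update?api_key=") + apiKey +
                         "&field1=" + count + " HTTP/1.1\r\n" +
                         "Host: " + host + "\r\nConnection: close\r\n\r\n");
          }
          client.stop();
        }
      }
    }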

    Since the two models for entry and exit are deployed on two different microcontrollers, calculating the average wait time based on data from different microcontroller is a bit hard, but this uses a simple logic function.

    X = number of people who have entered

    Y = number of people who have exited

    X - Y = number of people who are inside the mall

    Z = threshold for the number of people allowed to be inside the mall

    Let Z - (X - Y) = count (this is the number of people, negative if exceeded, by which the mall is over or under the threshold limit)

    If "count" is negative, the wait time equals count * (the negative of the average time a person spends inside the mall)

    If "count" is positive, the wait time is zero

    In this way, the average queue time calculating algorithm is imposed.
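    As a concrete illustration of this formula, a small hypothetical helper that a central controller could run is sketched below; averageVisitMinutes is an assumed parameter, not something measured in the project.

    // Hypothetical helper implementing the queue wait-time formula above.
    // entered = X, exited = Y, threshold = Z, averageVisitMinutes = assumed average
    // time a person spends inside the store.
    long computeWaitMinutes(long entered, long exited, long threshold, long averageVisitMinutes) {
      long inside = entered - exited;          // X - Y : people currently inside
      long count  = threshold - inside;        // Z - (X - Y)
      if (count < 0) {
        return count * (-averageVisitMinutes); // negative count * negative of avg time => positive wait
      }
      return 0;                                // spare capacity, no wait
    }

    // Example: 120 entered, 60 exited, threshold 50, average visit 15 min:
    // inside = 60, count = -10, wait = (-10) * (-15) = 150 minutes.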

    Working of the Firmware:

    This is the complete setup of the firmware designed on Fritzing.

    This model comprises the following firmware:

  • Arduino 33 BLE Sense - processes the gathered data, classifies it, and sends commands according to the logic fed to it.
  • MG959 / MG995 Servo - Heavy duty servo( An external power supply may be applied) - To open and close the gates as per microcontroller command.
  • Arducam Mini 2mp plus - Continuous Raw data image accumulation from source.
  • Adafruit lithium ion charger - Used to deliver charge through the lithium battery
  • Lithium ion Battery - power source
  • ESP8266 - 01 - Used for sending data to the ThingSpeak dashboard via WiFi network.
  • MAX7219 4 in 1 display - Used for displaying the wait time on the display screen.
  • Functioning and Working of Logic in Code:

    The following are the libraries included in the main.ino code for the functioning of the model.

    #include <TensorFlowLite.h>

    #include "main_functions.h"

    #include "detection_responder.h"
    #include "image_provider.h"
    #include "model_settings.h"
    #include "person_detect_model_data.h"
    #include "tensorflow/lite/micro/kernels/micro_ops.h"
    #include "tensorflow/lite/micro/micro_error_reporter.h"
    #include "tensorflow/lite/micro/micro_interpreter.h"
    #include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
    #include "tensorflow/lite/schema/schema_generated.h"
    #include "tensorflow/lite/version.h"

    In the following code snippet, the loop is defined and performed. Since this is the main.ino code, it controls the core functioning of the model - used to run the libraries in the model.

    void loop() {
    // Get image from provider.
    if (kTfLiteOk !=GetImage(error_reporter, kNumCols, kNumRows, kNumChannels,
    input->data.uint8)) {
    TF_LITE_REPORT_ERROR(error_reporter, "Image capture failed.");
    }

    // Run the model on this input and make sure it succeeds.
    if (kTfLiteOk !=interpreter->Invoke()) {
    TF_LITE_REPORT_ERROR(error_reporter, "Invoke failed.");
    }

    TfLiteTensor* output =interpreter->output(0);

    // Process the inference results.
    uint8_t person_score =output->data.uint8[kPersonIndex];
    uint8_t no_person_score =output->data.uint8[kNotAPersonIndex];
    RespondToDetection(error_reporter, person_score, no_person_score);
    }

    The following code snippet shows the libraries needed to capture and run inference on the image. After capture, the images are converted to a standardized 96*96 size that can be interpreted on the Arduino board.

    Here, the Arducam mini 2mp OV2640 library has been utilised.

    This code has been provided in the arduino_image_provider.cpp snippet

    #if defined(ARDUINO) &&!defined(ARDUINO_ARDUINO_NANO33BLE)
    #define ARDUINO_EXCLUDE_CODE
    #endif // defined(ARDUINO) &&!defined(ARDUINO_ARDUINO_NANO33BLE)

    #ifndef ARDUINO_EXCLUDE_CODE

    // Required by Arducam library
    #include <Arduino.h>
    #include <SPI.h>
    #include <Wire.h>
    // Arducam library
    #include <ArduCAM.h>
    // JPEGDecoder library
    #include <JPEGDecoder.h>

    // Checks that the Arducam library has been correctly configured
    #if !(defined OV2640_MINI_2MP_PLUS)
    #error Please select the hardware platform and camera module in the Arduino/libraries/ArduCAM/memorysaver.h
    #endif

    // The size of our temporary buffer for holding
    // JPEG data received from the Arducam module
    #define MAX_JPEG_BYTES 4096
    // The pin connected to the Arducam Chip Select
    #define CS 7

    // Camera library instance
    ArduCAM myCAM(OV2640, CS);
    // Temporary buffer for holding JPEG data from camera
    uint8_t jpeg_buffer[MAX_JPEG_BYTES] ={0};
    // Length of the JPEG data currently in the buffer
    uint32_t jpeg_length =0;

    // Get the camera module ready
    TfLiteStatus InitCamera(tflite::ErrorReporter* error_reporter) {
    TF_LITE_REPORT_ERROR(error_reporter, "Attempting to start Arducam");
    // Enable the Wire library
    Wire.begin();
    // Configure the CS pin
    pinMode(CS, OUTPUT);
    digitalWrite(CS, HIGH);
    // initialize SPI
    SPI.begin();
    // Reset the CPLD
    myCAM.write_reg(0x07, 0x80);
    delay(100);
    myCAM.write_reg(0x07, 0x00);
    delay(100);
    // Test whether we can communicate with Arducam via SPI
    myCAM.write_reg(ARDUCHIP_TEST1, 0x55);
    uint8_t test;
    test =myCAM.read_reg(ARDUCHIP_TEST1);
    if (test !=0x55) {
    TF_LITE_REPORT_ERROR(error_reporter, "Can't communicate with Arducam");
    delay(1000);
    return kTfLiteError;
    }

    The final part where in the complete model is controlled is the Arduino_detection_responder.cpp.

    This is a small code snippet of the entire logic used. When the confidence score for a person is greater than the confidence score for no person, the gate is opened and it is assumed that a person has been detected. For this purpose the servo is moved to 0 degrees to open the gate. On the detection of a person, the count, which begins at 0, is incremented by 1. This count indicates the number of people coming inside, and its value is sent to the ThingSpeak IoT dashboard, representing the number of people entering. When the count reaches a value of 50, the gate is closed and a wait time of 15 minutes is imposed on the queue. The gate is closed and the wait time is imposed each time 50 people enter; for this, a logic of multiples of 50 is used.

    // Switch on the green LED when a person is detected,
    // the red when no person is detected
    if (person_score> no_person_score) {
    digitalWrite(LEDG, LOW);
    digitalWrite(LEDR, HIGH);
    servo_8.write(0); // this servo moves to 0degrees in order to open the mall door when a person is detected to ensure no touch entry system
    count++; // increment the number of people who have entered

    } else {
    digitalWrite(LEDG, HIGH);
    digitalWrite(LEDR, LOW);
    servo_8.write(180); // this servo moves to 180degrees when no person is detected
    }

    TF_LITE_REPORT_ERROR(error_reporter, "Person score:%d No person score:%d",
    person_score, no_person_score);
    }


    // Now we have let 50 people inside the store, so we set a delay of 15 min wait time for the others waiting outside before letting them in
    // Displaying the wait time on the screen every 1 min

    if (count % 50 == 0 && count > 0) { // when people are detected in multiples of 50, start displaying the wait time, asking the others to wait for 15 min
    myDisplay.setTextAlignment(PA_CENTER);
    myDisplay.print("Waiting 15min");
    delay(60000);
    }

    Setting up the ThingSpeak Dashboard:

    Since the features in Thingspeak Dashboard are limited, I will not be implementing the Time prediction algorithm right now but I am working on the logic to communicate and write data from the Dashboard to microcontroller to perform the time calculation algorithm.

    In the ThingSpeak Dashboard, I have added two fields; one for entry and the other one for exit.

    The co-ordinates for the store or mall for which the queuing system is displayed, is also added in the form of a map.

    The data displayed for the first field is gathered through the entry responding logic and the data displayed for the second field is gathered through the exit responding logic.

    This is the snip of the two different logics used in the model.

    The ThingSpeak dashboard can be made available to the staff of the store to check, in real time, the number of people entering and exiting the store. This data can also be analysed for a given day at a given time, so that further restrictions can be imposed if the number of people in the store exceeds the expected number for that day.

    This Dashboard can be viewed here:IoT Dashboard

    The following represents the field created for this purpose.

    Now, a question might arise as to whether this person detection model could be replaced by ultrasonic or infrared sensors. Flaws of ultrasonic or infrared based sensors: these sensors are not particularly accurate, and for a real-time person count display they may provide wrong readings. They also add extra hardware, whereas in a go-to-market solution the person detection algorithm can be implemented on existing cameras, reducing hardware cost. The data from these cameras could be sent to the Arduino BLE Sense for central classification and data processing.

    GO TO MARKET &PRACTICALITY:

  • Malls and Supermarkets can use this to identify The count of people entering and exiting the mall in real time.
  • Implement Strategies using this data to ensure Safety and Compliance with efficient Queue management algorithms.
  • Decrease Labour and automate Queue Management Process
  • Offer Dashboard to the visitors to monitor the density of people inside the mall and accordingly visit the mall at the safest point of time.
  • This product can be used to ensure the visitors that the mall is a safe place and hence, can increase the sales and visits following Government guidelines
  • Companies offering Ai and IoT based solutions can invest for mass production and distribution.
  • The more supermarkets use this product, the more access the government has to the data, and the more choice customers have to select the safest place in their locality; the queue time required for each store can also be monitored. This leads to a wide range of supermarket options in the locality, comparing queue time and safety.
  • Comparatively affordable solution as compared to manual queuing system and updating information manually to the Dashboard.
  • Utilize real-time CCTV footage to impose Queue management in a mall/shop through person detection in terms of timely trends and spatial analysis of person density in the mall.
  • Enable Stores to make better, data-driven decisions that ensure your safety and efficient Queues based on autonomous queuing system.
  • Github Code:Arduino Autonomous TinyML and IoT based queuing system.

    Addition to the Existing Person Detection Algorithm- Mask Detection System:

    Mask Detection Model based on TinyML:

    Dr. Kierstin Kennedy, chief of hospital medicine at the University of Alabama at Birmingham, said, “Masks can protect against any infectious illness that may be spread by droplets. For example, the flu, pertussis (whooping cough), or pneumonia.”

    She added that wearing a cloth mask has benefits beyond slowing the spread of COVID-19, and that source control can reduce the transmission of many other easily spread respiratory infections — the kind that typically render people infectious even before they display symptoms, like influenza.

    Until the threat of this pandemic has been neutralized, people should embrace the protection masks allow them to provide to those around them.

    After all, it’s not necessarily about you — it’s about everyone you come in contact with.

    It’s not at all uncommon to be an asymptomatic carrier of the new coronavirus — which means that even if you have no symptoms at all, you could potentially transmit the virus to someone who could then become gravely ill or even die.

    Adhering to this, I decided to reinforce the necessity of wearing face masks, along with touch-free systems, to increase safety in malls and supermarkets. Alongside the person detection algorithm, I decided to build a custom face mask detection model which detects face masks and displays this data on the ThingSpeak IoT dashboard, to increase awareness among mall staff as well as visitors, so that they know the time trends when the most people are without masks. This increases the sense of warning and awareness in people to wear masks. Accordingly, the store staff can monitor these trends and increase restrictions based on data-driven statistics.

    Deciding upon the Logic and Dataset of the Model:

    This is the overall logic used in most face mask detection algorithms. Since we are deploying this model to an Arduino 33 BLE Sense, the deployment process will vary.

    There are two steps involved in constructing the model for Face Mask Detection.

  • Training: Here we’ll focus on loading our face mask detection dataset from disk, training a model (using Keras/TensorFlow) on this dataset, and then serializing the face mask detector to disk
  • Deployment: Once the face mask detector is trained, we can then move on to loading the mask detector, performing face detection, and then classifying each face as mask or no_mask
    Dataset used in training this model:

    The original dataset used in this process consists of 3,500 images, but to reduce the size of the model and feed in accurate images, I used 813 of them, increasing the accuracy of the model while decreasing its bulk. The resulting model is about 676K in size and uses nearly 440.3K of RAM. Since this is the optimized version of the original model, its accuracy is 87.47%, compared to 98.15% for the non-optimized one.

    The following software has been used in designing this model:

  • TensorFlow lite
  • ThingSpeak
  • Arduino Web Editor
    Heading towards designing the model in Edge Impulse Studio:

    Powerful deep learning models (based on artificial neural networks) are now reaching microcontrollers. Over the past year great strides were made in making deep learning models smaller, faster and runnable on embedded hardware through projects like TensorFlow Lite for Microcontrollers, uTensor and Arm's CMSIS-NN; but building a quality dataset, extracting the right features, and training and deploying these models can still be complicated.

    Using Edge Impulse you can now quickly collect real-world sensor data, train ML models on this data in the cloud, and then deploy the model back to your Arduino device. From there you can integrate the model into your Arduino sketches with a single function call.

    Step 1 - Acquisition of Data in the Edge Impulse Studio:

    Using the dataset of 3,500 images, I filtered them down to the best performing ones and finally fed 813 images into the training data and 487 images into the testing data. I labelled the classes as mask and no_mask.

    Then I created an impulse design that best suited the model type. For optimal accuracy it is recommended to use a standard image size of 96*96, which also works best on the Arduino 33 BLE Sense. Since the input type was images, I selected "Images" in the processing block. For the learning block, the type recommended for image learning is Transfer Learning (Images), which fine-tunes a pre-trained image model on your data and gives good performance even with relatively small image datasets.

    The next step was saving the parameters based on color depth. Here, I selected RGB rather than grayscale because, in the dataset I am using for mask detection, color is an important feature for classification. On this page, we can also see the raw features alongside the processed features of the image.

    After Feature Generation I obtained the classification graph or the feature explorer where I could see the classes based on their classifications. The blue dots represent mask images and the orange dots represent the no_mask images.

    In this feature generation, I obtained a fair classification with a distinct classification plot.

    Finally, Moving on to the Transfer Learning Plot:

    Here, I set the number of training cycles/epochs to 30 to get the highest accuracy with minimum val_loss. If the model is trained for too many cycles, the accuracy starts to drop after a certain number of epochs and the val_loss increases; therefore I decided to limit the epochs to 30, which proved to be a good fit. The learning rate is set to 0.0005, which is the default and proved to be the most appropriate. Here, I have used the MobileNetV2 0.35 (final layer: 16 neurons, 0.1 dropout) model because it is comparatively lightweight and accurate.

    Finally, after completing the 30 training cycles and 10 epochs of fine-tuning the best-performing model, the accuracy headed towards 1.00 and the loss was nearly 0, at 0.0011. The following was the output during the training process:

    Saving best performing model... Converting TensorFlow Lite float32 model... Converting TensorFlow Lite int8 quantized model with float32 input and output...

    Epoch 9/10 21/21 - 5s - loss:0.0011 - accuracy:1.0000 - val_loss:0.0727 - val_accuracy:0.9755
    Epoch 10/10 21/21 - 5s - loss:0.0012 - accuracy:1.0000 - val_loss:0.0728 - val_accuracy:0.9755 Finished training

    This was the final output which I received after the training:

    The accuracy came out to be 92.6% and the loss 0.19.

    Finally, using the test data, I tested the accuracy and found it to be 98.15%.

    Finally, since the model was ready, I deployed it as an Arduino library with the Arduino 33 BLE Sense as the target firmware. I got the zip folder of the library ready and started making changes as per our requirements.
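    For reference, calling the exported library from a sketch follows the usual Edge Impulse pattern. The header name below is hypothetical (the Studio generates it as <project>_inferencing.h), and the image buffer is assumed to be filled by the Arducam capture code shown later, so treat this as a sketch rather than the exact code in the repository.

    #include <spectrino_mask_detection_inferencing.h>  // hypothetical name of the exported library header

    // One float per pixel (Edge Impulse packs RGB into a single float per pixel)
    static float image_buffer[EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT];

    static int get_image_data(size_t offset, size_t length, float *out_ptr) {
      memcpy(out_ptr, image_buffer + offset, length * sizeof(float));
      return 0;
    }

    void classifyFrame() {
      signal_t signal;
      signal.total_length = EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT;
      signal.get_data = &get_image_data;

      ei_impulse_result_t result;
      if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
        return;  // inference failed
      }
      for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        // For this project the labels are "mask" and "no_mask"
        Serial.print(result.classification[i].label);
        Serial.print(": ");
        Serial.println(result.classification[i].value);
      }
    }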

    For a final confirmation, I live-classified the data to ensure it was classified properly, and got the expected results:

    Changing the code as per the output required in the model:

    Here is a snippet of the main.ino code of the mask_detection model

    I have included the Arduino libraries required for the functioning of the model, which is built on the TensorFlow Lite framework.

    This is the loop calling some of the main functions in the model, which are defined in the libraries; the main.ino code centralises these functions and runs them in a loop.

    void loop() {
      // Get image from provider.
      if (kTfLiteOk != GetImage(error_reporter, kNumCols, kNumRows, kNumChannels,
                                input->data.uint8)) {
        TF_LITE_REPORT_ERROR(error_reporter, "Image capture failed.");
      }

      // Run the model on this input and make sure it succeeds.
      if (kTfLiteOk != interpreter->Invoke()) {
        TF_LITE_REPORT_ERROR(error_reporter, "Invoke failed.");
      }

      TfLiteTensor* output = interpreter->output(0);

      // Process the inference results (index constants come from model_settings.h).
      uint8_t mask_score = output->data.uint8[kMaskIndex];
      uint8_t no_mask_score = output->data.uint8[kNoMaskIndex];
      RespondToDetection(error_reporter, mask_score, no_mask_score);
    }

    The processed data of this model can be viewed here (this file is relatively large and varies with the dataset size of the model): Arduino_mask_detect_model_data.h

    For providing images, I will be using the Arducam Mini 2MP Plus for visual data input. A snippet from the image_provider file is:

    #include "image_provider.h"

    /*
    * The sample requires the following third-party libraries to be installed and
    * configured:
    *
    * Arducam
    * -------
    * 1. Download https://github.com/ArduCAM/Arduino and copy its `ArduCAM`
    * subdirectory into `Arduino/libraries`. Commit #e216049 has been tested
    * with this code.
    * 2. Edit `Arduino/libraries/ArduCAM/memorysaver.h` and ensure that
    * "#define OV2640_MINI_2MP_PLUS" is not commented out. Ensure all other
    * defines in the same section are commented out.
    *
    * JPEGDecoder
    * -----------
    * 1. Install "JPEGDecoder" 1.8.0 from the Arduino library manager.
    * 2. Edit "Arduino/Libraries/JPEGDecoder/src/User_Config.h" and comment out
    * "#define LOAD_SD_LIBRARY" and "#define LOAD_SDFAT_LIBRARY".
    */

    #if defined(ARDUINO) && !defined(ARDUINO_ARDUINO_NANO33BLE)
    #define ARDUINO_EXCLUDE_CODE
    #endif  // defined(ARDUINO) && !defined(ARDUINO_ARDUINO_NANO33BLE)

    #ifndef ARDUINO_EXCLUDE_CODE

    // Required by Arducam library (the header names below were lost in formatting;
    // they are restored from the stock TensorFlow Lite Micro person_detection example)
    #include <SPI.h>
    #include <Wire.h>
    #include <memorysaver.h>
    // Arducam library
    #include <ArduCAM.h>
    // JPEGDecoder library
    #include <JPEGDecoder.h>

    // Checks that the Arducam library has been correctly configured
    #if !(defined OV2640_MINI_2MP_PLUS)
    #error Please select the hardware platform and camera module in the Arduino/libraries/ArduCAM/memorysaver.h
    #endif

    The Arduino_detection_responder.cpp code handles the inference results and delivers the main output required. Here, when a person with a mask is detected, we open the gate, and if a person with no mask is detected, we keep the gate closed.

    // Switch on the green LED when a mask is detected,
    // the red when no mask is detected
    if (mask_score > no_mask_score) {
      digitalWrite(LEDG, LOW);
      digitalWrite(LEDR, HIGH);
      servo_8.write(0);    // the servo moves to 0 degrees to open the mall door when a mask is detected, ensuring a no-touch entry system
      count++;
    } else {
      digitalWrite(LEDG, HIGH);
      digitalWrite(LEDR, LOW);
      servo_8.write(180);  // the servo moves to 180 degrees to keep the door closed when no mask is detected
      count2++;
    }

    count and count2 are integer variables that increment every time a mask is detected or not detected, respectively. These counts are then displayed on the ThingSpeak IoT Dashboard as follows:

    The graph displays the mask and no mask count over time. This thingspeak dashboard is setup in the Arduino code as seen here:

    // The WiFi headers were lost in formatting; for the ESP8266-01 used here they
    // would typically be the two below (an assumption, matching the Send_Data() snippet later).
    #include <ESP8266WiFi.h>
    #include <ESP8266WiFiMulti.h>
    ESP8266WiFiMulti WiFiMulti;
    const char* ssid = "Yourssid";            // Your SSID (name of your WiFi)
    const char* password = "Wifipass";        // Your WiFi password
    const char* host = "api.thingspeak.com";  // channel 1118220; client.connect() needs the bare hostname
    String api_key = "9BRPKINQJJT2WMWP";      // Your API key provided by ThingSpeak

    The complete code for responder can be viewed here:Arduino_detection_responder.cpp

    Github Code can be viewed here:Arduino Mask Detection

    The ThingSpeak dashboard can be viewed here - IoT Dashboard

    5th project - Person detection in supermarket Aisles and self-sanitisation system:

    Just as malls, supermarkets and retail sectors have opened, the risk of contamination with the virus has increased. The main risk is in supermarkets and stores: whether a store sells clothes, food or even electronics, the virus can be deposited on these surfaces and stay there for a long time.

    Early data suggests the new coronavirus can live on surfaces for several days

    The new coronavirus is, well, new — and there's still much to learn about how easily the virus can spread via contaminated surfaces. But early evidence indicates that the surface survivability of the new coronavirus is similar to that of SARS, a related coronavirus first identified in 2002. Depending on the surface, the virus can live on surfaces for a few hours or up to several days.

    The new coronavirus seems to be able to survive the longest on plastic and stainless steel — potentially as long as three days on these surfaces. It can also live on cardboard for up to 24 hours.

    Find out what this means for the things you touch throughout the day, including your:

  • Clothes
  • Food
  • Groceries
  • Packages
  • Electronics
    The coronavirus pandemic has breathed new life into a decades-old technique that can zap viruses and bacteria: ultraviolet light.

    Hospitals have been using it for years to cut down on the spread of drug-resistant superbugs and to disinfect surgical suites. But there is now interest in using the technology in spaces like schools, office buildings, and restaurants to help reduce coronavirus transmission once public spaces are open again.

    The sanitizing effects of UV lights have been seen with other coronaviruses, including the one that causes severe acute respiratory syndrome (SARS). Studies have shown that it can be used against other coronaviruses. One study found at least 15 minutes of UVC exposure inactivated SARS, making it impossible for the virus to replicate. New York's Metropolitan Transit Authority announced the use of UV light on subway cars, buses, technology centers, and offices. The National Academy of Sciences says although there is no concrete evidence for UV’s effectiveness on the virus that causes COVID-19, it has worked on other similar viruses, so it would likely fight this one too.

    Abiding by the given information, malls and supermarkets have started deploying UV sanitisation solutions on a manual basis. This consumes a lot of time, and the data is unavailable. In a manual process, one does not know which area has been exposed to human touch more, or which area has the highest risk of contamination.

    The aim is to automate this process and to provide real-time spatial analysis of the data collected and the rate of contamination in an area.

    The above images demonstrate the distribution of population across the mall. As clearly seen in Figure B, the density near the escalator is comparatively high compared to the density on the second floor. Manually keeping track of this continuously changing data in real time and sanitising the areas/aisles according to the rate of contamination is not feasible.

    Hence, I decided to make a robust solution for automating this process with the help of TinyML, computer vision & IoT deployed on the Arduino 33 BLE Sense.

    The structured framework of this system is as follows:

    These units are individually deployed in the aisles and areas of malls and supermarkets, close to the area that has to be sanitised. Since the model utilises a person detection algorithm, the placement is done accordingly.

    In this model, for accumulation of Video data, the Arducam mini 2MP Plus is used. The Arducam Mini 2MP Plus continuously gathers visual data and sends this data to the Arduino Nano 33 BLE Sense for processing and classification. The Arduino classifies the data and accordingly processes commands. If a person is detected, the count of people increments by 1 on detection of each person near the Aisle.

    A certain threshold for the count of people is set depending upon the rate of contamination based on the object type in that area. The rate of contamination and threshold for food aisle, clothes aisle, sports aisle and Electronic aisle is different.

    Accordingly, taking an average threshold into consideration: if up to 25 people are detected, the area is safe. If the count increases to 50, the area is heading towards contamination and the visitors are given an alert. If the count increases to 100, the area is declared contaminated and people are warned to be careful while touching objects. Finally, if the threshold limit of 150 is crossed, the area is autonomously sanitised using UV light.

    The alerts generated are based on LED colours: green indicates safe, blue indicates alert, and red indicates warning. These LED based alerts are installed in the respective areas individually, as in the sketch below.
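    A minimal sketch of how such an external common-cathode RGB status LED could be driven (the pin numbers and the status codes are assumptions, not taken from the repository code):

    const int PIN_R = 2;   // red leg of the external status LED
    const int PIN_G = 3;   // green leg
    const int PIN_B = 4;   // blue leg

    void setupStatusLed() {
      pinMode(PIN_R, OUTPUT);
      pinMode(PIN_G, OUTPUT);
      pinMode(PIN_B, OUTPUT);
    }

    // 'S' = safe (green), 'A' = alert (blue), 'W' = warning (red).
    // With a common-cathode LED, a colour lights up when its pin is driven HIGH.
    void showStatus(char status) {
      digitalWrite(PIN_G, status == 'S' ? HIGH : LOW);
      digitalWrite(PIN_B, status == 'A' ? HIGH : LOW);
      digitalWrite(PIN_R, status == 'W' ? HIGH : LOW);
    }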

    The above image shows the various aisles in the malls where the system can be installed.

    The above image shows the area of installation of Arducam or similar visual data capturing device to cover a wide spectrum of people walking through the aisle.

    The above Images show a demo implementation of the UV Sanitisation Lights in suitable areas of the aisles. This UV light covers a large spectrum of area that can be sanitised together.

    Implementation of the autonomous sanitisation system.

    The following software has been used in designing this model:

  • TensorFlow lite
  • ThingSpeak
  • Arduino Web Editor
    In this person detection model, I have used the pre-trained TensorFlow person detection model, which is apt for the project. This pre-trained model consists of three classes, of which the third class contains an undefined set of data:

    "unused",

    "person",

    "notperson"

    In our model, the Arducam Mini 2MP Plus carries out image intake, and this image data is sent at a decent frame rate to the Arduino Nano 33 BLE Sense for processing and classification. Since the microcontroller provides 256 KB of RAM, we resize each image to a standard 96*96 for processing and classification. The Arduino TensorFlow Lite network consists of a deep learning framework with:

  • Depthwise Conv_2D
  • Conv_2D
  • AVERAGE Pool_2D
  • Flatten layer
    This deep learning framework is used to train the person detection model; a sketch of how these operators are registered with the interpreter follows below.
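    This is a hedged sketch of the registration step (the exact calls differ between versions of the Arduino_TensorFlowLite library; the names model, tensor_arena, kTensorArenaSize and error_reporter come from the stock person_detection example, not from new code of mine):

    // Pull in only the operators the person-detection graph actually uses,
    // which keeps flash and RAM usage on the Nano 33 BLE Sense small.
    void setupInterpreter() {
      static tflite::MicroMutableOpResolver<5> micro_op_resolver;
      micro_op_resolver.AddAveragePool2D();
      micro_op_resolver.AddConv2D();
      micro_op_resolver.AddDepthwiseConv2D();
      micro_op_resolver.AddReshape();
      micro_op_resolver.AddSoftmax();

      // Build the interpreter on top of the resolver and the model data array.
      static tflite::MicroInterpreter static_interpreter(
          model, micro_op_resolver, tensor_arena, kTensorArenaSize, error_reporter);
      interpreter = &static_interpreter;
      interpreter->AllocateTensors();
    }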

    The following is the most important function defined while processing outputs on the microcontroller via Arduino_detection_responder.cpp

    // Process the inference results.
    uint8_t person_score = output->data.uint8[kPersonIndex];
    uint8_t no_person_score = output->data.uint8[kNotAPersonIndex];
    RespondToDetection(error_reporter, person_score, no_person_score);

    In this function, person_score and no_person_score are derived from the classification output of the model.

    Using these values, I give certain outputs based on the confidence of person_score and no_person_score.

    The detection responder logic of the code works in the following way:

    ├── Person Detection and sanitisation
    ├── Arducam mini 2MP Plus
    │ ├──Image and Video Data to Arduino
    ├── Arduino BLE 33 Sense
    │ ├── processing and classification of the input data
    │ │ ├── If person detected, increment the count by 1
    │ │ ├── If no person detected, do nothing
    │ │ ├── Send the count of people entered to the ThingSpeak Dashboard via ESP8266-01
    │ │ | ├── If people count is up to 25
    │ │ │ | ├── Indicate the area to be safe by flashing the green light

    │ │ | ├── If people count is up to 50
    │ │ │ | ├── Indicate that people should be aware by flashing the blue light

    │ │ | ├── If people count is up to 100
    │ │ │ | ├── Indicate the area to be contaminated by flashing the red light

    │ │ | ├── If people count is between 150 to 175
    │ │ │ | ├── Sanitise the area by activating the UV light

    │ │ │ | ├── Reset the person count to 0 since the area has been sanitised
    │ │ └── ...Repeat the loop

    According to the logic used in the model, the Arducam Mini 2MP Plus continuously captures visual data and sends it to the Arduino Nano 33 BLE Sense to process and classify. The model size is 125 Kb. Once the image is processed, the Arduino starts to classify the captured data. If a person is detected, the person count increases by 1. This count is continuously sent to the ThingSpeak dashboard via the ESP8266-01 IoT module. The person count provides the supermarket staff with the number of visitors in a particular area at any given point of time, so the staff can make data-driven decisions to take action in an area if the visitor count is significantly high.

    Proceeding to the output: if the person count is between 1 and 25, the area is declared safe by flashing the green LED. If the person count is between 26 and 50, the visitors are given an alert by flashing the blue LED. If the count is between 51 and 100, the area is declared contaminated by flashing the red LED. When the count surpasses a certain threshold, here taken to be 150, the UV light is turned on until the 165th person passes the area. In this way the area is sanitised, and the count of people is then reset to 0.

    Working of the Firmware:

    This model comprises the following firmware:

  • Arduino 33 BLE Sense - processes the data gathered, classifies it, and sends commands according to the logic fed in.
  • Arducam Mini 2mp plus - Continuous Raw data image accumulation from source.
  • Adafruit lithium ion charger - Used to deliver charge through the lithium battery
  • Lithium ion Battery - power source
  • ESP8266 - 01 - Used for sending data to the ThingSpeak dashboard via WiFi network.
  • RGB LED - Used for flashing Signals based on the status of contamination
  • UV Light - Used to Sanitise the Area ( Since this is a prototype, an LED is displayed. In the actual solution, the UV light consumes a lot of energy hence an external power source needs to be provided. )
    Functioning and Working of Logic in Code:

    The following are the libraries included in the main.ino code for the functioning of the model.

    #include <TensorFlowLite.h>  // header name lost in formatting; restored from the stock example

    #include "main_functions.h"

    #include "detection_responder.h"
    #include "image_provider.h"
    #include "model_settings.h"
    #include "person_detect_model_data.h"
    #include "tensorflow/lite/micro/kernels/micro_ops.h"
    #include "tensorflow/lite/micro/micro_error_reporter.h"
    #include "tensorflow/lite/micro/micro_interpreter.h"
    #include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
    #include "tensorflow/lite/schema/schema_generated.h"
    #include "tensorflow/lite/version.h"

    The main outcome of the model depends on the logic fed into detection_responder.cpp. This snippet shows the generalized contamination threshold, with the UV light triggered once the count passes 100.

    void RespondToDetection(tflite::ErrorReporter* error_reporter,
                            uint8_t person_score, uint8_t no_person_score) {
      static bool is_initialized = false;
      if (!is_initialized) {
        // Pins for the built-in RGB LEDs on the Arduino Nano 33 BLE Sense
        pinMode(LEDR, OUTPUT);
        pinMode(LEDG, OUTPUT);
        pinMode(LEDB, OUTPUT);
        is_initialized = true;
      }

      // Note: the RGB LEDs on the Arduino Nano 33 BLE
      // Sense are on when the pin is LOW, off when HIGH.

      // Switch the person/not person LEDs off
      digitalWrite(LEDG, HIGH);
      digitalWrite(LEDR, HIGH);

      // Flash the blue LED after every inference.
      digitalWrite(LEDB, LOW);
      delay(100);
      digitalWrite(LEDB, HIGH);

      // Switch on the green LED when a person is detected,
      // the red when no person is detected
      if (person_score > no_person_score) {
        digitalWrite(LEDG, LOW);
        digitalWrite(LEDR, HIGH);
        count++;   // count is the global person counter for this aisle
      } else {
        digitalWrite(LEDG, HIGH);
        digitalWrite(LEDR, LOW);
      }

      TF_LITE_REPORT_ERROR(error_reporter, "Person score:%d No person score:%d",
                           person_score, no_person_score);

      // Contamination thresholds for the generalized case (UV triggered above 100).
      // The original used constrain(), which always returns a non-zero value here,
      // so explicit range checks are used instead.
      if (count >= 1 && count <= 50) {
        digitalWrite(LEDGREEN, LOW);  // green LED: the area is not contaminated yet
      }
      if (count >= 51 && count <= 75) {
        digitalWrite(LEDBLUE, LOW);   // blue LED: alert, more than 50 people have passed
      }
      if (count >= 76 && count <= 100) {
        digitalWrite(LEDRED, LOW);    // red LED: the area is contaminated, touch objects with care
      }
      if (count >= 101 && count <= 110) {
        digitalWrite(UV, LOW);        // UV light: sanitise the area after 100 people have passed
      }
      if (count > 111) {
        count = 0;                    // reset the person count once the area has been sanitised
      }
    } // In this way, a simple math function on the Arduino Nano 33 BLE Sense alerts people and sanitizes areas
    /* cc - Dhruv Sheth */

    The snippet below is carried out by the ESP8266-01 module to send data to the allocated field in the ThingSpeak Dashboard:

    void Send_Data()
    {
      // Use WiFiClient class to create TCP connections
      WiFiClient client;

      const int httpPort = 80;

      if (!client.connect(host, httpPort)) {
        Serial.println("connection failed");
        return;
      }
      else
      {
        String data_to_send = api_key;
        data_to_send += "&field1=";
        data_to_send += String(count);
        data_to_send += "\r\n\r\n";

        client.print("POST /update HTTP/1.1\n");
        client.print("Host: api.thingspeak.com\n");
        client.print("Connection: close\n");
        client.print("X-THINGSPEAKAPIKEY: " + api_key + "\n");
        client.print("Content-Type: application/x-www-form-urlencoded\n");
        client.print("Content-Length: ");
        client.print(data_to_send.length());
        client.print("\n\n");
        client.print(data_to_send);

        delay(10); // reduced delay to perform real time data collection
      }

      client.stop();
    }

    The threshold for each aisle inside the mall is set differently according to its rate of contamination. These values can be altered according to the needs of the mall or supermarket, and also according to time trends and the population density in that area.

    Threshold set for the Clothes Aisle:

    if (count >= 1 && count <= 50) {
      digitalWrite(LEDGREEN, LOW); // green LED: the area is not contaminated yet
    }
    if (count >= 51 && count <= 75) {
      digitalWrite(LEDBLUE, LOW);  // blue LED: alert, more than 50 people have passed
    }
    if (count >= 76 && count <= 100) {
      digitalWrite(LEDRED, LOW);   // red LED: warns people that this area is contaminated and objects should be touched with care
    }
    if (count >= 101 && count <= 110) {
      digitalWrite(UV, LOW);       // UV light: after 100 people have touched the aisle objects, the area is sanitised
    }
    if (count > 111) {
      count = 0;                   // reset the person count to 0 once the area has been sanitised by UV light
    }

    Threshold for the electronics aisle: since the virus stays on these surfaces for comparatively less time, a higher threshold is set in this case.

    if (count >= 1 && count <= 75) {
      digitalWrite(LEDGREEN, LOW); // green LED: the area is not contaminated yet
    }
    if (count >= 76 && count <= 180) {
      digitalWrite(LEDBLUE, LOW);  // blue LED: alert, a larger number of people have passed
    }
    if (count >= 181 && count <= 225) {
      digitalWrite(LEDRED, LOW);   // red LED: warns people that this area is contaminated and objects should be touched with care
    }
    if (count >= 226 && count <= 240) {
      digitalWrite(UV, LOW);       // UV light: sanitises the area once the higher electronics threshold is crossed
    }
    if (count > 240) {
      count = 0;                   // reset after sanitisation (the original snippet reset at 111, which would stop the later thresholds from ever firing)
    }

    Threshold set for food and eatables: since these are consumables, they face a higher risk of contamination, hence a lower threshold is set in this case.

    if (count >= 1 && count <= 25) {
      digitalWrite(LEDGREEN, LOW); // green LED: the area is not contaminated yet
    }
    if (count >= 26 && count <= 50) {
      digitalWrite(LEDBLUE, LOW);  // blue LED: alert, more than 25 people have passed
    }
    if (count >= 51 && count <= 75) {
      digitalWrite(LEDRED, LOW);   // red LED: warns people that this area is contaminated and objects should be touched with care
    }
    if (count >= 76 && count <= 90) {
      digitalWrite(UV, LOW);       // UV light: sanitises the food aisle at this lower threshold
    }
    if (count > 91) {
      count = 0;                   // reset the person count to 0 once the area has been sanitised by UV light
    }

    Setting up the Thingspeak IoT Dashboard:

    On the ThingSpeak dashboard, 4 fields have been added for the 4 aisles that have been set up, namely:

  • Food-aisle
  • Sports-aisle
  • Clothes-aisle
  • Electronics-aisle
    The coordinate location of the mall can be seen in the channel visualization, which can be cumulatively displayed on a single dashboard through which the data for each supermarket and mall in a locality can be accessed.

    The IoT Dashboard can be viewed here:IoT Dashboard

    The fields and channels of all aisles can be displayed together on a single ThingSpeak IoT Dashboard.
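    If a single gateway node aggregates the four aisle counters, one ThingSpeak update can carry all four fields at once. This is a hedged sketch (the counter names are assumptions); the resulting string is posted to /update exactly as in the Send_Data() function above.

    String buildAisleUpdate(int foodCount, int sportsCount,
                            int clothesCount, int electronicsCount,
                            const String &api_key) {
      String body = api_key;
      body += "&field1=" + String(foodCount);         // Food-aisle
      body += "&field2=" + String(sportsCount);       // Sports-aisle
      body += "&field3=" + String(clothesCount);      // Clothes-aisle
      body += "&field4=" + String(electronicsCount);  // Electronics-aisle
      return body;
    }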

    The above image shows a density plot of the population in a given region of a store in a mall.

    Currently I have been using the ThingSpeak Dashboard to plot the data in a graphical format. But, using the same data and Tableau visualizations, it is possible to plot the given data on the floor plot map of the Supermarket.

    This data will be easily accessible to other visitors entering the supermarket, who will be able to make data-driven decisions such as visiting the store with a low population density.

    Now a question might arise: what makes the product unique?

    Currently, UV sanitisation is implemented on a manual basis, with staff manually monitoring these areas. Autonomous UV robots are deployed in malls to map these areas and sanitise them autonomously. However, these solutions are not capable of sanitising each aisle and area of the mall while the population is flowing through it. Their sanitisation is limited to the lower part of the aisles, leaving the upper parts unsanitised. They are also comparatively expensive, and they are not capable of continuous data monitoring and sanitisation proportional to the rate of contamination; that is, they cannot monitor the number of people who have passed through a certain area or aisle, and hence cannot sanitise areas according to their contamination.

    Go to Market and Viability:

  • Malls and supermarkets can use this to identify the count and density of people in a certain area of a store and impose a self-sanitisation process within the mall.
  • Implement strategies using this data to ensure safety and compliance with efficient population-density monitoring algorithms.
  • Decrease labour and automate the sanitisation process.
  • Offer the dashboard to visitors to monitor the density of people in a certain area inside the mall; accordingly, visitors can see which store has the lowest population density and make decisions based on this data. Visitors can also see the time trends of sanitisation through the abrupt change in the person-density graph of an area when sanitisation is conducted (each time an area is sanitised, the person count is reset to 0).
  • This product can reassure visitors that the mall is a safe place and hence increase sales and visits while following government guidelines.
  • Companies offering AI and IoT based solutions can invest for mass production and distribution.
  • The more supermarkets use this product, the more data is accessible to the government and the more choice customers have to select the safest place in their locality, including which store is continuously sanitised and up to the mark in terms of safety. This leads to a wide range of supermarket options in the locality when comparing queue time and safety.
  • Comparatively affordable compared to an autonomous UV robot system, and highly scalable in terms of data and services with the IoT Dashboard provided.
  • Utilize real-time CCTV footage to run the autonomous sanitisation system through person detection, in terms of timely trends and spatial analysis of person density in the mall.
  • Enable stores to make better, data-driven decisions that ensure safety and efficient queues based on the autonomous queuing system.
  • Github Link :https://github.com/dhruvsheth-ai/self-sanitisation-person-detection

    PROJECT VIDEO:

    Thank you for viewing my project!


    Code

  • Elevator Automation using "up" - "down" speech command (Arduino)
    The below is a .zip file of the Arduino library. No preview (download only).
  • Mall Aisle Self Sanitization with Person Detection System
    https://github.com/dhruvsheth-ai/self-sanitisation-person-detetction
  • Temperature monitoring system based on IoT
    https://github.com/dhruvsheth-ai/temperature-arduino-iot
  • Mask Detection Algorithm
    https://github.com/dhruvsheth-ai/ble-mask-detection-optimised
  • Autonomous Person Intercom based on TinyML
    https://github.com/dhruvsheth-ai/person-autonomous-intercom
  • Elevator Automation using TinyML
    https://github.com/dhruvsheth-ai/elevator-Automation-ARDUINO
  • Mall Entrance Person Detection and Queuing System
    Queuing system based on TinyML person detection at the mall entrance and exit, connected via the IoT ThingSpeak dashboard for live data monitoring
    https://github.com/dhruvsheth-ai/Person-queuing-system-arduino33

    Schematics

    This is the schematic for the mall aisle person detection and self-sanitization system that transfers data to the ThingSpeak IoT Dashboard. As an alternative, it can detect a person present at the door, ring the bell, and display "person" on the LED matrix.
