How can I address the miscommunication between people and the current weather-forecast system by designing a new way of perceiving weather? AWA is a small audio-visual device that depicts different kinds of weather data.
When I came to New York City, the weather fluctuated so much that I dressed wrong for it more than half the week, even though I checked the forecast every day. With this in mind, I conducted a survey to learn about other people's general perception of the weather and their actual weather-data consumption. I then realized there is a clear gap between people's perception of the weather and the real data.
The survey showed that half of the respondents are sensitive not only to the high/low temperature but also to humidity, air quality, and wind. Yet only about 10% of them actually used that data in a weather application. I also found that city dwellers have historically relied on alarm clocks, a habit that has now moved almost completely to mobile apps.
However, my advisor Tom Igoe seriously urged me to consider the skeptical side of IoT products: many start-ups have failed for lack of sufficient user research. With this comment, he suggested a new approach built on ethnographic research –
The alarm clock is a similar information appliance in the home, historically. Alarm clock sales have dipped as more people now use their phones as alarm clocks, and many people check the weather on their phones or tablets before dressing. An app to play the sound on the phone is a feasible approach to this project. The phone's alarm sound could be weather-dependent, so that if you use your phone to wake up, the alarm sound would automatically tell you the weather.
If you are serious about IoT, then be very skeptical of it as well. Every new device you build has costs in time, materials, environmental impact, and learning for the end user, and when the application can be served by an existing device that is already part of the user's routine, that is often the more feasible and popular solution.
I was struck by this comment 😦 and realized I hadn't done enough user research or deep enough consideration. However, I had already bought some hardware by then, so I asked Tom to excuse me and decided to build a web version after finishing the IoT version.
Hardware configuration & breadboard prototyping
For the hardware configuration, I first tested a few MCUs: the Arduino MKR1000, the Arduino Yún, and the Adafruit HUZZAH ESP8266. The Yún was too slow at uploading each sketch, and the MKR1000 was great but had a problem controlling the VS1053 over software serial. I finally found that the HUZZAH ESP8266 was what I was looking for, even though each upload took at least 30 seconds…
The VS1053 MIDI and sound chip was great to use, with its own built-in MIDI instruments. I also tested the Adafruit Music Maker, but I only succeeded at MP3 playback, with a bunch of failures on the MIDI instruments. Since the board has no amplifier, I added the MAX31865 amplifier from Adafruit.
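For reference, the VS1053's real-time MIDI mode simply consumes standard MIDI bytes over a 31250-baud serial line, which is why a software-serial pin is enough to drive it. The helpers below are a minimal sketch of how the note-on/note-off messages are built (the channel and velocity choices are my own illustration, not the actual AWA firmware), written as plain C++ so the byte layout is clear:

```cpp
#include <array>
#include <cstdint>

// A MIDI channel-voice message is three bytes: status, data1, data2.
// These helpers build the raw bytes that would be written to the
// serial port feeding the VS1053 at 31250 baud.
std::array<uint8_t, 3> midiNoteOn(uint8_t channel, uint8_t pitch, uint8_t velocity) {
    // 0x90 = note-on status; the low nibble selects the channel (0-15).
    // Data bytes are masked to 7 bits, as MIDI requires.
    return { static_cast<uint8_t>(0x90 | (channel & 0x0F)),
             static_cast<uint8_t>(pitch & 0x7F),
             static_cast<uint8_t>(velocity & 0x7F) };
}

std::array<uint8_t, 3> midiNoteOff(uint8_t channel, uint8_t pitch) {
    // 0x80 = note-off status; release velocity 0 is conventional.
    return { static_cast<uint8_t>(0x80 | (channel & 0x0F)),
             static_cast<uint8_t>(pitch & 0x7F),
             0 };
}
```

On the Arduino side these three bytes would just be written out one after another to the software-serial port wired to the VS1053's MIDI-in pin.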
For the LED light, I chose Adafruit's 8×8 LED Matrix, since I initially imagined the four-column design below.
Lighting effect prototype
Using an existing white matte acrylic panel, I tested the look and feel of the LED light.
The effect was better than I expected! It would be even better if adjacent LEDs felt more connected, though. I also noticed, by chance, an after-image effect in the dot-based red and yellow sequences. It's not a big problem, and it even looks better than nothing, but I still have to figure out exactly what causes it.
However, the problem was that users could not clearly read the LED arrangement. Although the light was an optional function accompanying the sound, mapping the data onto a matrix was still difficult. My initial idea was to give each data type two columns and play and visualize the four data sets simultaneously, but as Tom said, that ran into technical limitations, so I had no choice but to arrange each data set as a sequence.
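To make the original two-columns-per-metric idea concrete, here is a plain C++ sketch of that layout (the metric order, normalization to [0, 1], and bottom-up bar-graph style are my assumptions, not the final AWA design): each of the four data types owns two adjacent columns of the 8×8 matrix, and its normalized value lights the corresponding number of rows.

```cpp
#include <cstdint>

// Convert a metric normalized to [0, 1] into a number of lit rows (0-8).
uint8_t rowsForValue(float value01) {
    if (value01 < 0.0f) value01 = 0.0f;
    if (value01 > 1.0f) value01 = 1.0f;
    return static_cast<uint8_t>(value01 * 8.0f + 0.5f);  // round to nearest row
}

// Fill an 8x8 frame buffer: metric index m (0-3) owns columns 2m and 2m+1,
// e.g. temperature, humidity, air quality, wind (order assumed).
void drawFourMetrics(const float values01[4], bool frame[8][8]) {
    for (int m = 0; m < 4; ++m) {
        uint8_t rows = rowsForValue(values01[m]);
        for (int y = 0; y < 8; ++y) {
            bool on = y < rows;              // light from the bottom row up
            frame[y][2 * m]     = on;
            frame[y][2 * m + 1] = on;
        }
    }
}
```

On the actual hardware, each `frame[y][x]` cell would then become a pixel call in Adafruit's LED matrix library inside the drawing loop.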
As for the sound, frankly, it showed me a clear limitation: the sound kit could not play multiple MIDI signals together. I finally understood what Tom Igoe had asked me: 'Why are you making your life more difficult?'
Using JSON in Arduino (ESP8266)
Handling JSON data was also a problem: parsing arrays and handling square brackets didn't work when I used the ArduinoJson library. So I made my own data set for the prototyping, and the ArduinoJson Assistant converted the existing JSON data directly into actual Arduino code.
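Since the arrays and square brackets were the failure point, the workaround was a flat object with no arrays at all. The field names and values below are illustrative (not the actual AWA data set), but this is the shape of data that parses cleanly with ArduinoJson's plain key lookups and that the ArduinoJson Assistant turns directly into Arduino code:

```json
{
  "temp_high": 12,
  "temp_low": 4,
  "humidity": 68,
  "air_quality": 42,
  "wind_speed": 5.4
}
```

Each field is then read with a simple key lookup on the parsed document instead of indexing into nested arrays.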
- MKR1000 + neopixel
- MKR1000 + weather API
- MKR1000 + Sound
- VS1053 with MIDI: PianoGlove
- Topic: Software Serial library for MKR1000
- Interface between MKR1000 and Music Maker
- Air Quality API
- Arduino JSON
- Arduino JSON Assistant
Things I can do next
- re-produce PCB
- Speaker test
- More lighting patterns