Micro-service driven and Docker provisioned, that’s how Matilda rolls

After debuting the GitHub profile and releasing the first two open-source projects that compose Matilda, I started to get a better view of how I want Matilda to be architected, and for the first few functionalities I came up with the following (admittedly horrid) drawing of the architecture:

Each of the squares represents a micro-service that will be available on Matilda’s GitHub, circles represent third-party services and/or technologies that the micro-service beside them will connect to, and triangles represent physical hardware that the micro-service beside them will require to work.

Eye in the sky

A (really) simple weather forecast API written in Flask that leverages OpenWeather data to provide weather insights. As mentioned in previous posts, the first meaningful functionality Matilda will have is the ability to talk about the weather. This app will run in the cloud; view it on GitHub.
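For a taste of what that looks like, here’s a minimal sketch of such a Flask endpoint. The route, the response fields, and the OPENWEATHER_API_KEY environment variable are placeholders of mine, not necessarily what the actual project uses:

```python
import os

import requests
from flask import Flask, jsonify

app = Flask(__name__)

# Assumption: the OpenWeather API key comes from an environment variable.
OPENWEATHER_API_KEY = os.environ["OPENWEATHER_API_KEY"]
OPENWEATHER_URL = "https://api.openweathermap.org/data/2.5/weather"


@app.route("/forecast/<city>")
def forecast(city):
    # Query OpenWeather's current-weather endpoint for the given city.
    resp = requests.get(
        OPENWEATHER_URL,
        params={"q": city, "appid": OPENWEATHER_API_KEY, "units": "metric"},
        timeout=5,
    )
    resp.raise_for_status()
    data = resp.json()
    # Return only the fields Matilda needs to talk about the weather.
    return jsonify(
        city=city,
        description=data["weather"][0]["description"],
        temperature=data["main"]["temp"],
    )


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```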

Cochlea

The cochlea /ˈkɒk.liə/ (Ancient Greek: κοχλίας, kōkhlias, meaning spiral or snail shell) is the auditory portion of the inner ear. Matilda’s cochlea is a Python app that constantly listens to the environment, looking for commands. This app will require a microphone to act as Matilda’s ear, and will need to run in loco for obvious reasons; view it on GitHub.
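A bare-bones version of that listening loop could look like the sketch below. I’m assuming PyAudio for microphone access here, and handle_audio is a hypothetical hand-off function; the real Cochlea may be wired up differently:

```python
import pyaudio

RATE = 16000   # 16 kHz mono, 16-bit samples: a common capture setup for speech
CHUNK = 1024   # frames read from the microphone per iteration


def handle_audio(frames: bytes) -> None:
    # Hypothetical hand-off: the real app would buffer frames and ship them
    # to the speech-to-text service once a command is detected.
    pass


p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                input=True, frames_per_buffer=CHUNK)

print("Listening...")
try:
    while True:
        # Read raw audio from the microphone, chunk by chunk.
        handle_audio(stream.read(CHUNK))
except KeyboardInterrupt:
    stream.stop_stream()
    stream.close()
    p.terminate()
```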

Auditory Cortex

The primary auditory cortex is the part of the temporal lobe that processes auditory information in humans, other vertebrates, and Matilda.

In boring terms, this is a minimalist Python app that handles speech-to-text. This app will run in the cloud; view it on GitHub.
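As a sketch of the idea (not the project’s actual code), a minimalist version using the speech_recognition library and Google’s free Web Speech API could look like this; the /transcribe route and the multipart “audio” field are my own assumptions:

```python
import io

import speech_recognition as sr
from flask import Flask, jsonify, request

app = Flask(__name__)
recognizer = sr.Recognizer()


@app.route("/transcribe", methods=["POST"])
def transcribe():
    # Assumption: callers POST a WAV file in a multipart field named "audio".
    wav = io.BytesIO(request.files["audio"].read())
    with sr.AudioFile(wav) as source:
        audio = recognizer.record(source)  # read the whole file into memory
    # recognize_google talks to Google's free Web Speech API; the real
    # Auditory Cortex may use a different engine entirely.
    return jsonify(text=recognizer.recognize_google(audio))
```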

Cerebrum

The cerebrum performs higher functions like interpreting touch, vision, and hearing, as well as speech, reasoning, emotions, learning, and fine control of movement. It’s also the largest part of the brain and is composed of right and left hemispheres. And so, this is a Python app that receives commands and connects to Amazon Lex to process them and, hopefully, give an appropriate response.

The Cochlea captures my voice; the Auditory Cortex processes it and transforms it into text, which is what Lex understands; the Cerebrum then sends that text to Lex, which processes it and returns a response. All of these are connected and managed by the Corpus Callosum. This app will run in the cloud; view it on GitHub.
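The Lex round-trip itself is small. Here’s a hedged sketch using boto3’s Lex (V1) runtime client; the bot name, alias, and region are placeholders, since I don’t yet know what the final bot will be called:

```python
import boto3

# Lex V1 runtime client; the region is a placeholder.
lex = boto3.client("lex-runtime", region_name="us-east-1")


def ask_matilda(text: str, user_id: str = "matilda-user") -> str:
    """Send a text command to Lex and return its reply."""
    response = lex.post_text(
        botName="Matilda",   # hypothetical bot name
        botAlias="prod",     # hypothetical alias
        userId=user_id,
        inputText=text,
    )
    return response.get("message", "")
```

One nicety here: Lex tracks conversation state per userId, so the caller can keep a stable id per session to get multi-turn dialogue for free.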

Corpus Callosum

The corpus callosum (/ˈkɔːrpəs kəˈloʊsəm/; Latin for “tough body”), also known as the callosal commissure, is a wide, flat bundle of neural fibers about 10 cm long beneath the cortex in the eutherian brain at the longitudinal fissure. It connects the left and right cerebral hemispheres and facilitates interhemispheric communication. It is the largest white matter structure in the brain, consisting of 200–250 million contralateral axonal projections.

In other words, this is the application responsible for interconnecting the different micro-services that compose Matilda. After receiving a response from Lex, through Cerebrum, Matilda can then respond using her voice, which is handled by Broca’s area. This app will run in loco on the Orange Pi; view it on GitHub.
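At its core, that orchestration is just a chain of HTTP calls. A sketch, with entirely hypothetical service URLs and payload shapes:

```python
import base64

import requests

# Hypothetical endpoints: the real deployment will have its own addresses.
AUDITORY_CORTEX = "http://auditory-cortex:5000/transcribe"
CEREBRUM = "http://cerebrum:5000/command"
BROCAS_AREA = "http://brocas-area:5000/speak"


def handle_utterance(wav_bytes: bytes) -> bytes:
    """Route one captured utterance through the whole pipeline."""
    # 1. Speech to text via the Auditory Cortex.
    text = requests.post(AUDITORY_CORTEX,
                         files={"audio": ("cmd.wav", wav_bytes)},
                         timeout=10).json()["text"]
    # 2. Ask the Cerebrum (and, through it, Lex) for a reply.
    reply = requests.post(CEREBRUM, json={"text": text},
                          timeout=10).json()["message"]
    # 3. Turn the reply into speech via Broca's Area, which returns
    #    base64-encoded audio embedded in JSON.
    audio_b64 = requests.post(BROCAS_AREA, json={"text": reply},
                              timeout=10).json()["audio"]
    return base64.b64decode(audio_b64)
```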

Broca’s Area

Broca’s area or the Broca area /broʊˈkɑː/ or /ˈbroʊkə/ is a region in the frontal lobe of the dominant hemisphere (usually the left) of the hominid brain with functions linked to speech production. In other words, this is what allows Matilda to talk.

Or, in a more generic and boring definition, this is a Python micro-service capable of receiving text and turning it into speech using AWS Polly; the artefact generated is a JSON response with base64 audio embedded. This app will run in the cloud; view it on GitHub.
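In code, the Polly call is pleasantly short. A sketch of such an endpoint follows; the route, voice, and payload shape are my guesses, not necessarily the project’s choices:

```python
import base64

import boto3
from flask import Flask, jsonify, request

app = Flask(__name__)
polly = boto3.client("polly", region_name="us-east-1")


@app.route("/speak", methods=["POST"])
def speak():
    # Assumption: the caller POSTs JSON like {"text": "Hello"}.
    text = request.get_json()["text"]
    result = polly.synthesize_speech(
        Text=text,
        OutputFormat="mp3",
        VoiceId="Joanna",  # the voice choice is a placeholder
    )
    # Embed the MP3 bytes as base64 in the JSON response, as described above.
    audio_b64 = base64.b64encode(result["AudioStream"].read()).decode("ascii")
    return jsonify(audio=audio_b64)
```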

At the moment, all of these micro-services are fully functional except for Cerebrum and Corpus Callosum; I’ll be working on those next. It’s also very important to mention that every single micro-service is somewhat well documented and contains a Dockerfile, which allows anyone to leverage every single bit of Matilda in their own personal projects.
