With the COVID-19 pandemic and ongoing social distancing measures, everyone is spending a great deal of time online, and kids most of all. Because kids are especially vulnerable to cyber threats such as phishing, bullying, harassment, and theft, it is crucial to supervise their online engagement and ensure their safety while respecting their privacy. Fran Killoway, a renowned inventor and the Founder and Executive Chairman of ‘meka’, an intuitive application that acts as a digital buddy, using artificial intelligence to safeguard its user from threats in the digital space, was dismayed to see that modern technology could not protect kids online in near real-time while respecting their privacy. She says, “Kids are our future and we find today’s bad behavior online abhorrent, and it needs to stop”.
Fran has been contributing to the creation, development, and delivery of technology for more than 20 years. Her previous venture, Merlin, was a steep learning curve that equipped her with the tools and rare skills needed to successfully launch meka in the United States, proving wrong the many who believed such a technology was not possible. meka works as the first intelligent computer-generated companion, tutor, and friend for flagging abusive behavior on social media, across the internet, and in the wider world.

Fran explains, “meka has a very sophisticated permission-based system that utilizes cutting-edge Neural Network technology, Artificial Intelligence, and Machine Learning. It benefits users by allowing independent and safe internet surfing, social inclusion in a safe environment, and access to applications without fear, while ensuring safety and protection from bad behavior on social media and the internet. Empowered by Machine Empathy technology, it learns from each user to understand their individual needs and preferences”.

Initially monitoring the six most popular social media sites and the six worst online sites, meka sits on each user’s device, keeping an eye on 28 potential threat categories including abuse in relationships, neglect, physical abuse, sexual abuse and harassment, nasty messages, humiliation, rumours and gossip, fake identity, domestic violence, self-harm, stalking, bullying, hate speech, issues of climate change, voyeurism, and several others. As a cyber-safety application powered by artificial intelligence, it provides 24/7 real-time notification of potential threats and abuse, giving users the privacy and security they need online. Fran emphasizes that this technology will also reduce mental healthcare costs and help users remain economically active by encouraging those who are currently afraid to use a computer to feel safe and empowered in their world.
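To make the idea of near-real-time flagging concrete, the sketch below shows how a permission-based, on-device screen might route an incoming message through a handful of threat categories and raise a notification. It is a minimal illustration only: the category names, keyword lists, and the `screen_message` and `notify` helpers are assumptions made for this example, and meka’s actual system relies on neural-network classifiers and Machine Empathy technology rather than simple keyword matching.

```python
# Hypothetical sketch of on-device threat flagging with instant notification.
# Categories and keywords below are illustrative placeholders, not meka's
# real taxonomy or model.
from dataclasses import dataclass
from typing import Dict, List

# Illustrative subset of the 28 threat categories mentioned in the article.
THREAT_KEYWORDS: Dict[str, List[str]] = {
    "bullying": ["loser", "nobody likes you"],
    "hate_speech": ["go back to where"],
    "self_harm": ["hurt myself", "end it all"],
}

@dataclass
class Alert:
    category: str
    excerpt: str

def screen_message(text: str) -> List[Alert]:
    """Return an alert for every threat category the message appears to match."""
    lowered = text.lower()
    alerts = []
    for category, keywords in THREAT_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            alerts.append(Alert(category=category, excerpt=text[:80]))
    return alerts

def notify(alerts: List[Alert]) -> None:
    """Stand-in for a real-time push notification to the user or a guardian."""
    for alert in alerts:
        print(f"[alert] possible {alert.category}: \"{alert.excerpt}\"")

if __name__ == "__main__":
    incoming = "You're such a loser, nobody likes you."
    notify(screen_message(incoming))
```

Run as-is, the example prints a single “bullying” alert for the sample message; a production system would replace the keyword lookup with learned classifiers and deliver the notification through the device’s own alert channels.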