The Ethical Implications of New Technology - IoT, AI & ML
As applications of artificial intelligence (AI) have proliferated through society, debates around the ethics of data and communication have become almost inseparable from them. Among the developments raising the most questions are Internet of Things (IoT) and AI technology.
There is, without a doubt, a lot of hype around IoT, AI, machine learning (ML) and related fields. The applications are endless: physical things can communicate with other physical things wirelessly, bringing us fast, contextual services. Take the Amazon Echo, which can answer our questions and operate other technology in the home. The capabilities of the technology have continued to grow, and it is becoming an integral part of everyday life. According to Business Insider, more than 75 billion objects will be connected to the IoT by 2025.
A poll by opensource.com four years ago showed that most respondents didn't own any IoT devices; today, respondents own an average of at least three each.
This large-scale expansion of technology has led to several ethical issues. IoT and advanced data collection make it easier to track and predict user behavior, with AI systems simplifying the task of working out what end users want in order to keep them engaged. With growth and technological advances, there are far more places to drive usage across many devices. Merging IoT and AI in this way creates fear among consumers about the privacy and security of their personal data, as well as the accuracy of the predictions made from it.
Ethical Issues with IoT
The main dilemma companies face is that consumers use their IoT products with little knowledge or understanding of how they work. On top of that, the general consensus is that IoT security is worryingly underdeveloped and the devices are highly prone to cyber-attacks. A Forbes report suggested that 2019 saw a 300% increase in cyber-attacks via IoT devices. This is put down to the exponentially growing number of devices, coupled with a focus on innovation at the expense of security.
IoT devices being released to the market have a “fire and forget” type of feel. Companies are releasing them to the market but failing to put relevant updates in place to ensure they remain safe.
The Mirai botnet attack of 2016 trawled the internet for unsecured IoT devices such as cameras and routers that still had factory-default passwords, and took them over. The malware then directed floods of requests at its targets in an attempt to overwhelm and take down major US websites. Insecure devices put others at risk, and that risk grows when the people using them lack the knowledge to secure them.
A further example came from the fitness app Strava, which revealed sensitive military information: it was possible to see the jogging routes used by soldiers around military bases. A network of insecure devices left people at risk of physical harm, especially if the data fell into the wrong hands.
A big reason why security is not a prevailing feature in IoT is cost. As security features become more complex, devices cost more to make, which does not work for businesses in a competitive, and in many cases start-up, environment. There is an ethical call here for governments to go further than the existing minimum regulatory requirements. The challenge lies in crafting regulations that do not stifle innovation while still accounting for the security implications.
IoT devices collect vast amounts of user data. The objective is to provide targeted services to users through marketing methods, creating both a business driver and a consumer benefit. A well-publicized case from the retailer Target shows how this can quickly go wrong. After mining the purchasing habits of a customer, Target began sending her pregnancy-related communications. It turned out that the customer was in fact a teenager who was trying to keep her pregnancy private.
The ethical issues coming out of this were two-fold. First, to get the data on her behavior, Target must have been monitoring card activity to see the products she was buying. Whilst the law in most countries now states that consumers must have the opportunity to opt out of such data collection, this is probably not instantly apparent to the everyday shopper. Also, with the rise of loyalty schemes and cards, you automatically opt in to having your data collated in return for what you consider to be benefits. Advanced IoT technology then allows you to be sent personalized offers based on your purchasing history.
Secondly, Target took it upon itself to send personalized communications about a major life event that the customer had not disclosed; it should not have the right to use that as a targeting method. In fact, the case involved a teenager: a young, vulnerable person targeted by a large organization without her consent, raising major privacy concerns as well as concerns about the impact on mental health.
IoT can easily cross ethical lines. If the Amazon Echo knows when we say "Alexa", surely it knows when we say other things. Even if seemingly innocuous information is leaked, such as when we make a coffee, it could still reveal something personal: a hacker who knows we make coffee at the same time every day could start tracking whether that changes.
Even unintentionally, IoT can build profiles that compromise privacy, putting the technology in an ethical dilemma.
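The coffee example above can be made concrete. The sketch below is purely hypothetical and not any real product's code: given timestamps of a smart kettle's daily brew events (an invented data set), a few lines of Python are enough to learn a household's routine and flag when it changes, which is exactly the kind of inference a leaked telemetry log makes possible.

```python
# Hypothetical sketch: how "innocuous" IoT event data can reveal personal
# patterns. The device, data, and function names are invented for illustration.
from statistics import mean, pstdev

# Hours of the day at which a smart kettle reported a brew event over two
# weeks -- the kind of telemetry a leaked log might contain.
brew_hours = [7.0, 7.1, 6.9, 7.2, 7.0, 7.1, 6.8,
              7.0, 7.1, 7.0, 6.9, 7.2, 7.1, 7.0]

baseline = mean(brew_hours)   # the household's usual brew time
spread = pstdev(brew_hours)   # how much it normally varies

def looks_anomalous(hour, threshold=3.0):
    """Flag a brew time far outside the household's learned routine."""
    return abs(hour - baseline) > threshold * max(spread, 0.1)

# A brew at the usual time blends in; one far outside the routine (or a
# missing one) hints the occupant's habits -- or presence -- have changed.
print(looks_anomalous(7.05))  # -> False: within the normal routine
print(looks_anomalous(15.0))  # -> True: routine has changed
```

Nothing here requires audio, location, or any obviously sensitive field; the timestamps alone are enough to profile behavior, which is the ethical dilemma in miniature.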
The Tech Giants
Amnesty International has claimed that the data collected by tech giants such as Google and Facebook to serve us targeted advertising is a violation of human rights. It says that you are almost forced to give data to these companies because they have become so synonymous with everything we do. Could people search the internet without Google? In practice, no, but every time we talk to the search engine, it learns more and more about us.
The Amnesty International report makes an extensive and strong case, arguing that we need to challenge the idea that such intrusive data collection is necessary.
The Future of New Technology and Ethics
In November 2019, Australia released a code of practice to govern the security of IoT devices. The code consists of 13 principles, which include:
- No duplication of weak passwords
- Implementation of a vulnerability disclosure policy
- Service developers and app providers must have a public point of contact
- Software and firmware must be kept updated
The principles align with, and build on, guidance from other countries such as the UK, which has issued similar rules. The European Union has also developed a code of ethics, and California has passed an IoT security law that came into effect in January 2020.
There is a recognition that IoT will be the driving force behind innovation, but we must focus on security and privacy now. This is the only way to gain enough trust from consumers and investors.
Crayon, a partner for your IoT & AI journey
At Crayon, AI is part of our core business. We offer advice and guidance from world-class experts who have been part of AI projects across multiple industries and have hands-on experience in implementing AI technologies to address real-life business problems.
Crayon has invested significantly in taking its AI practice from delivering proof-of-concepts to developing and implementing real-life applications, and we are proud to have been recognized by Microsoft as the global AI and Machine Learning Partner of the Year. Crayon continues to expand its influence in emerging technologies by establishing several AI centers of excellence hubs in various geographical locations.
Click here to read more about our award-winning AI capabilities.