Regulation of Artificial Intelligence: Taming the Big Elephant


Much has been said about the inevitability of technology, particularly Artificial Intelligence (AI), disruptively displacing long-established routine processes and digitally revolutionizing the way things get done. The revolution AI brings, and its impacts, are most likely to be felt in how formal businesses are conducted and how services are offered.


Artificial Intelligence and Machine Learning in Perspective

It appears, therefore, that there is an undisputed consensus that it is no longer a question of if, but when, business owners and service providers will rely heavily on AI in their day-to-day operations.


From Jack Ma's (Alibaba) investment in AI and Machine Learning (ML) to disruptive technologies in healthcare services, such as AI-assisted robotic surgery, virtual nursing assistants, AI support for clinical judgment and diagnosis, voice-to-text transcription that can help order tests, prescribe medications and write chart notes, and machine-learning algorithms that can analyze 3D scans up to 1,000 times faster than is currently obtainable, it would be stating the obvious to assert that AI and ML are here with us.


The Need to Strike a Balance between Regulation and Innovation

With the foregoing in mind, what has not been pronounced quite as loudly as the inevitability of the AI surge, but is of equally inevitable proportions, is the impending regulation, which is likely to be aimed at properly defining the limits, if any, of AI and ML usage, its application and coverage when it fully arrives, particularly in developing countries.


The fact is that there is, at the moment, a global business race to develop cutting-edge AI software and ML systems. While this move represents a positive technological turning point for doing business on a global scale, it equally raises grave concerns about the gap between the capabilities of the new innovations and the need for standard quality and performance requirements and performance evaluation. Hence, there is a need to strike a balance between innovation and regulation, as it is important that regulation is not seen as stifling growth, creativity and ingenuity.


A Step in the Right Direction?

In a step that corroborates the foregoing, the European Group on Ethics (EGE), echoing the doubts and hesitations shared by governments and regulatory bodies around the world, put forward on 19 December 2018 what appears to be a policy-hinting statement, noting that, by its own estimation and judgment, technology companies worldwide are in a frenzied race to develop AI systems, but that it (the EGE) has core concerns about the ethical and societal implications of these innovations.


Towards Effective Regulation of Artificial Intelligence: Finding the Balance

No doubt, the use of autonomous technologies such as the Internet of Things (IoT), core AI technology and robotics, and ML systems can be beneficial for companies, governments and citizens alike. The benefits AI offers are best understood in terms of ease of work and convenience, better analysis and application of big data, and machine simplification of complex tasks, among others.


The foregoing notwithstanding, and as with every new innovation, there is undeniably a clear need for proactive and forward-thinking governments in different parts of the world to kick-start the process of enacting a truly general, internationally acceptable ethical and legal framework, one that will aid and enhance the entire process of design, development, production, application and governance of these autonomous systems. Our world is now, more than ever, exposed to changing times with respect to the future of work, disruptive innovation and the impact of emerging technology on basic levels of human control and knowledge standards.


The days are gone when AI systems were the product of linear human instructions; they are now generations of software whose endless code outsmarts the ordinary limits of any known human capacity. Instances such as Google's claim that its Google Brain develops AI better and faster than humans ever will, and AlphaZero bootstrapping itself from zero chess knowledge to world-champion level in four hours, make it likely that one will conclude that these innovations are less and less open to scrutiny by humans. Relatedly, and arguably questionable, is their unfettered access to virtually all available data, despite ethical and human rights concerns. Perhaps this is best attributed to the fact that the data and initial algorithms that give birth to an AI are often developed through learning processes and may no longer be readily available or accessible.


A careful reflection on the foregoing is likely to lead one to conclude that there is a need for proper, effective and robust regulation that addresses ethical principles and focuses on respect for human rights and sustainable development. This must, of essence, include: the recognition and continued respect of the right to human dignity, which should not be violated by the emerging ‘autonomous technologies’; the regulation of the development and use of such innovations only in ways that serve the overall social, environmental and societal good; contributions and advancements that benefit justice, equality and solidarity in AI, Machine Learning and Robotics; accountability and observance of global human rights standards in AI regulation; and data protection and compliance with established standards of privacy and data usage, amongst others.


Concluding Remarks

While the recent move by business owners and service providers towards leveraging AI and ML, in areas including but not limited to security services (cybersecurity defense inclusive), network infrastructure, insurance, legal services, healthcare, intelligent conversational chatbots, market predictions and voice skills, can neither be stopped nor derailed, it becomes important to provide effective and robust regulation that strikes a good balance between public interest and innovation on the one hand, and the desire for renewed creativity and the protection of existing individual human rights on the other.


It is true that, as they say, he who tries to stop the wind will eventually be swept away with it. But while we cannot stop the oncoming wind of AI technology, we can at least erect windbreakers (legal regulation) and wind turbines (ethical considerations), both to positively maximize its potential and to proactively prevent, or at least limit, its negative possibilities.
