Large IT businesses are pushing the boundaries in the quest for cutting-edge technology, transforming themselves into digital sovereigns with a global presence and setting new rules of the game.

The AI (Artificial Intelligence) race is growing ever more fascinating, with the two main players, Alphabet, Google’s parent company, and Microsoft, dueling for pole position. On Tuesday, March 14, 2023, Google revealed features for Google Docs that can draft blog posts, training plans, and other text. It also revealed a Google Workspace overhaul that can summarize Gmail conversations, generate presentations, and capture meeting notes. “This next step is where we’re bringing human beings to be backed by an AI collaborator who is working in real time,” said Thomas Kurian, Chief Executive Officer of Google Cloud, during a press conference.

Microsoft introduced its latest AI tool, Microsoft 365 Copilot, on Thursday, March 16, 2023. Copilot harnesses the potential of LLMs (Large Language Models) in conjunction with business data and the Microsoft 365 apps. “We believe this next generation of AI will unlock a new wave of productivity growth,” says CEO Satya Nadella. This is in addition to the ongoing chatbot war between Microsoft-backed OpenAI’s ChatGPT and Google’s Bard.

While these corporations and many others invest billions in research and development of technology that they claim will help businesses and people become more productive, its social impact is coming under scrutiny. It is widely acknowledged that AI will have a significant impact on our society; it is equally true that not all of that impact will be beneficial.

Although AI may considerably enhance efficiency and help human beings by complementing their work and by taking over risky jobs, making the workplace safer, it will also have economic, legal, and regulatory ramifications that we need to be ready for. We will need to create structures to ensure that it does not violate legal or ethical bounds.

Critics expect widespread unemployment and the loss of millions of jobs, resulting in societal instability. They are also concerned that algorithms could be biased, resulting in needless profiling of people. Another issue that will affect daily life is the technology’s capacity to manufacture fake news, misinformation, and misleading or inappropriate content. The danger is that people will believe a computer because they assume it is infallible. The use of deepfakes is not a technological challenge in isolation; it reflects the cultural and behavioral tendencies already on display on social media.

Intellectual Property Issue

There is also the issue of who owns the intellectual property for AI developments. Is it patentable? In the United States and the European Union, there are rules governing what can and cannot be considered a patentable invention. What constitutes an original invention is now being debated: can new artifacts created from existing ones be considered inventions? There is no agreement on this, and authorities in various countries have rendered diametrically opposed decisions, as evidenced by the patent applications Stephen Thaler filed for his DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) system, which were rejected in the UK, the EU, and the US but granted in Australia and South Africa. One thing is certain: due to the intricacies inherent in AI, the existing IP protections that govern software will be inadequate, and new frameworks will need to emerge and mature in the near future.

Environmental Impact

The infrastructure behind AI consumes a large amount of energy. Training a single LLM has been estimated to emit roughly 300,000 kilograms (300 tonnes) of CO2. This casts doubt on the technology’s long-term sustainability and prompts the question: what is AI’s environmental footprint?

Alexandre Lacoste, a Research Scientist at ServiceNow Research, and his colleagues created an emissions calculator to estimate the carbon footprint of training machine learning models.
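As a rough illustration of how such calculators work, here is a minimal sketch of the underlying estimate: energy drawn by the hardware, scaled up for data-center overhead, multiplied by the carbon intensity of the local grid. The function name and all numbers below are illustrative assumptions, not figures from Lacoste’s tool.

    # Minimal sketch of the estimation formula behind ML emissions calculators.
    # All parameter values below are illustrative assumptions.

    def estimate_training_co2_kg(gpu_power_watts: float,
                                 num_gpus: int,
                                 training_hours: float,
                                 pue: float = 1.5,
                                 grid_kg_co2_per_kwh: float = 0.4) -> float:
        """Estimate the CO2 (kg) emitted by a training run.

        pue: data-center Power Usage Effectiveness (cooling/overhead multiplier).
        grid_kg_co2_per_kwh: carbon intensity of the local electricity grid.
        """
        energy_kwh = (gpu_power_watts * num_gpus / 1000.0) * training_hours * pue
        return energy_kwh * grid_kg_co2_per_kwh

    # Example: 1,000 GPUs at 300 W each, training for two months (~1,440 hours).
    print(f"{estimate_training_co2_kg(300, 1000, 1440):,.0f} kg CO2")  # ≈ 259,200 kg

Under these assumed numbers, a run on the order of a thousand accelerators for a couple of months lands in the hundreds of tonnes of CO2, the same scale as the figure quoted above; real calculators refine this with regional grid data and hardware-specific power draw.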

The Ethics of AI

Another unintended consequence of pervasive AI systems will be ethical in nature. “AI presents three major areas of ethical concern for society: privacy and surveillance, bias and discrimination, and perhaps the deepest, most difficult philosophical question of the era, the role of human judgment,” writes American political philosopher Michael Sandel.

There is currently no regulatory framework in place for large technology enterprises. Corporate leaders “can’t have it both ways, denying responsibility for AI’s detrimental implications while simultaneously resisting government control,” says Sandel, adding that “we can’t expect that market forces by themselves can sort it out”.

There is discussion of regulatory tools to control the fallout, but no agreement on how to proceed. The European Union has made an attempt with the AI Act. The regulation sorts AI applications into three risk categories. First, applications and systems that pose an unacceptable risk, such as the government-run social scoring used in China, are prohibited. Second, high-risk applications, such as a CV-scanning tool that ranks job candidates, must follow strict legal requirements. Finally, applications that are neither explicitly prohibited nor labeled as high-risk are left largely unregulated.

The Act proposes safeguards for AI applications that have the potential to harm people, such as systems for grading exams, recruiting, or supporting judges in decision-making. It also seeks to limit the use of AI for scoring people’s trustworthiness based on their reputation, and the use of facial recognition in public spaces by law enforcement agencies. The Act is a solid start, but it will face hurdles before it becomes a final text, and many more before it becomes law. IT businesses are already apprehensive, wary of the compliance burden it could impose. Nonetheless, the Act has piqued the curiosity of many nations, with the United Kingdom’s AI strategy embracing ethical AI development and the United States debating whether to regulate AI technology and real-time facial recognition at the federal level.

Large technology businesses are pushing the boundaries in the quest for cutting-edge technology, transforming themselves into digital sovereigns with a global presence and inventing new rules of the game. Governments will do their part, but firms can contribute by developing a code of ethics for AI research and hiring ethicists to help them think through, create, and regularly update that code. Ethicists can also serve as watchdogs, ensuring that the code is followed and pointing out deviations from it.

Social and cultural factors will drive different countries’ responses to AI regulation, and in this scenario, the suggestion by Poppy Gustafsson, CEO of AI cybersecurity company Darktrace, of forming a “tech NATO” to combat and contain growing cybersecurity threats appears to be the way forward.