Kenya’s Hits and Misses on Journey to Eliminating Plastic Waste

Plastic bags were once part and parcel of life in Kenya. More than 100 million plastic bags were used annually in Kenyan supermarkets alone, with at least 24 million discarded every month. The country was choking under their weight. “Every food market, big or small, used plastic bags to wrap raw […]

UN Cybercrime Convention: Could the Cure Be Worse than the Disease?

Credit: CIVICUS

By Inés M. Pousadela
MONTEVIDEO, Uruguay, Jun 16 2023 – If you’ve never heard of the Cybercrime Convention, you’re not alone. And if you’re wondering whether an international treaty to tackle cybercrime is a good idea, you’re in good company too.

Negotiations have been underway for more than three years: the latest negotiating session was held in April, and a multi-stakeholder consultation has just concluded. A sixth session is scheduled to take place in August, with a draft text expected to be approved by February 2024, to be put to a vote at the UN General Assembly (UNGA) later next year. But civil society sees some big pitfalls ahead.

Controversial beginnings

In December 2019, the UNGA voted to start negotiating a cybercrime treaty. The resolution was sponsored by Russia and co-sponsored by several of the world’s most repressive regimes, which already had national cybercrime laws they use to stifle legitimate dissent under the pretence of combatting a variety of vaguely defined online crimes such as insulting the authorities, spreading ‘fake news’ and extremism.

Tackling cybercrime certainly requires some kind of international cooperation. But this doesn’t necessarily need a new treaty. Experts have pointed out that the real problem may be the lack of enforcement of existing international agreements, particularly the Council of Europe’s 2001 Budapest Convention.

When Russia’s resolution was put to a vote, the European Union, many states and human rights organisations urged the UNGA to reject it. But once the resolution passed, they engaged with the process, trying to prevent the worst possible outcome – a treaty lacking human rights safeguards that could be used as a repressive tool.

The December 2019 resolution set up an ad hoc committee (AHC), open to the participation of all UN member states plus observers, including civil society. At its first meeting to set procedural rules in mid-2021, Brazil’s proposal that a two-thirds majority vote be needed for decision-making – when consensus can’t be achieved – was accepted, instead of the simple majority favoured by Russia. A list of stakeholders was approved, including civil society organisations (CSOs), academic institutions and private sector representatives.

Another key procedural decision was made in February 2022: intersessional consultations were to be held between negotiating sessions to solicit input from stakeholders, including human rights CSOs. These consultations have given CSOs the chance to make presentations and participate in discussions with states.

Human rights concerns

Several CSOs are trying to use the space to influence the treaty process, including as part of broader coalitions. Given what’s at stake, in advance of the first negotiating session, around 130 CSOs and experts urged the AHC to embed human rights safeguards in the treaty.

One of the challenges is that, as early as the first negotiating session, it became apparent there wasn’t a clear definition of what constitutes a cybercrime or of which cybercrimes the treaty should regulate. There’s still no clarity.

The UN identifies two main types of cybercrimes: cyber-dependent crimes such as network intrusion and malware distribution, which can only be committed through the use of information and communications technologies (ICTs), and cyber-enabled crimes, which can be facilitated by ICTs but can be committed without them, such as drug trafficking and the illegal distribution of counterfeit goods.

Throughout the negotiation process there’s been disagreement about whether the treaty should focus on a limited set of cyber-dependent crimes, or address a variety of cyber-enabled crimes. These, human rights groups warn, include various content-related offences that could be invoked to repress freedom of expression.

These concerns have been highlighted by the Office of the UN High Commissioner for Human Rights, which has emphasised that the treaty shouldn’t include offences related to the content of online expression and should clearly and explicitly reference binding international human rights agreements to ensure it’s applied in line with universal human rights principles.

A second major disagreement concerns the scope and conditions for international cooperation. If not clearly defined, cooperation arrangements could result in violations of privacy and data protection provisions. In the absence of the principle of dual criminality – where extradition can only apply to an action that constitutes a crime in both the country making an extradition request and the one receiving it – state authorities could be made to investigate activities that aren’t crimes in their own countries. They could effectively become enforcers of repression.

Civil society has pushed for recognition of a set of principles on the application of human rights to communications surveillance. According to these, dual criminality should prevail and, where laws differ, the one with the higher level of rights protection should apply. States must also be prevented from using mutual assistance agreements and foreign cooperation requests to circumvent domestic legal restrictions.

An uncertain future

Following the third multistakeholder consultation, held in November 2022, the AHC released a negotiating draft. In the fourth negotiating session in January 2023, civil society’s major concerns focused on the long and growing list of criminal offences in the draft, many of them content-related.

It’s unclear how the AHC intends to bridge current deep divides to produce the ‘zero draft’ it’s expected to share in the next few weeks. If it complies with the deadline by leaving contentious issues undecided, the next session, scheduled for August, may bring a shift from consensus-building to voting – unless states decide to give themselves some extra time.

As of today, the process could still conclude on time, or with a limited extension, through a forced vote on a harmful treaty that, lacking consensus, either fails to enter into effect or does so only for a limited number of states. Or it could be repeatedly postponed and fade away. Civil society engaged in the process may well think that wouldn’t be so bad: better no agreement than one that gives repressive states stronger tools to stifle dissent.

Inés M. Pousadela is CIVICUS Senior Research Specialist, co-director and writer for CIVICUS Lens and co-author of the State of Civil Society Report.

 



The Regulation Tortoise and the AI Hare

The range of applications of artificial intelligence (AI) in education is expanding ceaselessly, although widespread adoption still seems far off. Despite the enormous opportunities AI can offer to support teaching and learning, the development of applications for higher education carries numerous implications and ethical risks. Credit: UNESCO

By Robert Whitfield
LONDON, Jun 16 2023 – Regulation of a technology typically emerges some time after it has been deployed in a product or service or, worse, after its risks have become apparent. This reactive approach is regrettable when real harm is already being done, as is now the case with AI. Where the risk is existential, it could mean the end of human existence.

In the past few months, generative artificial intelligence (AI) systems such as ChatGPT and GPT-4 became available with no (official) regulatory control at all. This is in complete contrast to new plastic duck toys, which must meet numerous regulations and safety standards. The fact is that the AI hare has been streaking ahead whilst the regulation tortoise, though moving, is way behind. This has to change – now.

What has shocked AI experts around the world is the recent progress from GPT-3.5 to GPT-4. Within a few months, GPT’s capability improved hugely on multiple tests: on the American bar exam, for example, it went from scoring around the 10th percentile to reaching the 90th percentile with GPT-4.

Why does this matter, you may ask? If that progress were projected forward at the same rate for the next 3, 6 or 12 months, it would rapidly lead to a very powerful AI. If uncontrolled, such an AI might have the power not only to do much good but also to do much harm – and with the fatal risk that it may no longer be possible to control once unleashed.

There is a wide range of aspects of AI that need, or will need, regulation and control. Quite apart from the new Large Language Models (LLMs), many examples exist already, such as attention-centred social media models, deepfakes, algorithmic bias and the abusive use of AI-controlled surveillance.

These may lead to a radical change in our relationship with work and to the obsolescence of certain jobs, including office jobs hitherto largely immune from automation. Expert artificial influencers – seeking to persuade you to buy something, or to think or vote in a certain way – are also anticipated soon, a process that some say has already started.

Credit: NicoElNino / Shutterstock.com

Without control, progress towards ever more intelligent AI will lead to Artificial General Intelligence (AGI – equivalent to the capability of a human across a wide range of fields) and then to Superintelligence (vastly superior intelligence). The world would enter an era marking the decline and likely demise of humanity, as we lose our position as the apex intelligence on the planet.

This very recent rate of progress has caused Yoshua Bengio and Geoffrey Hinton, the so-called ‘godfathers of AI/deep learning’, to completely reassess their anticipated time frames for the development of AGI. Both have radically brought forward their estimates: they now expect AGI to be reached within 5 to 50 and 5 to 20 years respectively.

Humanity must not knowingly run the risk of extinction, so controls need to be in place before Advanced AI is developed. Solutions for controlling Advanced AI have been proposed, such as Stuart Russell’s Beneficial AI, in which the AI is given the goal of implementing human preferences. It would need to learn those preferences by observation and, since it would appreciate that it might have interpreted them imprecisely, it would remain humble and prepared to be switched off.

Such a system is, however, very challenging to realise in practice. Whether a solution of this kind would be available in time was questionable even before the hare’s latest leap forward. Whether one will be available in time is now a critical question – which is why Geoffrey Hinton has recommended that 50% of all AI research spending should go to AI Safety.

Quite apart from these comprehensive but challenging solutions, several pragmatic ideas have recently been proposed to reduce the risk, ranging from a limit on the computational power available to train a Large Language Model to the creation of an AI agency equivalent to the International Atomic Energy Agency in Vienna. In practice, what is needed is a combination of technical solutions such as Beneficial AI, pragmatic solutions relating to AI development and a suitable governance framework.

As AI systems, like many of today’s cloud-based software services, can act across borders, interoperability will be a key challenge and a global approach to governance is clearly needed. To have global legitimacy, such initiatives should form part of a coordinated plan of action administered by an appropriate global body. This should be the United Nations, through the formation of a UN Framework Convention on Artificial Intelligence (UNFCAI).

The binding agreements currently expected to emerge within the next twelve months or so are the European Union’s AI Act and a Framework Convention on Artificial Intelligence from the Council of Europe. The Council of Europe’s work focuses on the impact of AI on human rights, democracy and the rule of law. Whilst participation in Council of Europe treaties is much wider than in the European Union, with other countries welcomed as signatories, it is not truly global in scope.

The key advantage of the UN is that it would seek to include all countries, including Russia and China, which have different value sets from the West. China has one of the two strongest AI sectors in the world. Many consider that a UN regime will ultimately be required – but recent events have turned that term “ultimately” upside down. The possibility of AGI emerging within five years suggests that a regime should be fully functioning by then. A more nimble institutional home could be found in the G7, but this would lack global legitimacy, inclusivity and the input of civil society.

Some people are concerned that engaging constructively with China, Russia and other authoritarian countries would validate their approach to human rights and democracy. There are clearly major policy differences on such issues, but effective governance of something as serious as artificial intelligence should not be jeopardised by these concerns.

In recent years the UN has made limited progress on AI. Back in 2020, the Secretary-General called for the establishment of a multistakeholder advisory body on global artificial intelligence cooperation; three years on, he is still proposing a similar body. This delay is highly regrettable and needs to be remedied urgently. It is particularly heartening, therefore, to witness the Secretary-General’s robust proposals in the past few days on AI governance, including an Accord on the global governance of AI.

EU Commissioner Margrethe Vestager has called for a three-step process: national regulation, then like-minded states, then the UN. The question is whether there is sufficient time for all three. The UN Secretary-General’s recent endorsement of the proposed UK initiative to hold a Summit on AI Safety this autumn is a positive development.

The Internet Governance Forum (IGF), established in 2005, brings people from various stakeholder groups together as equals to discuss issues relating to the Internet. AI policymaking could benefit from a similar forum: a Multistakeholder AI Governance Forum (AIGF).

This would provide an initial forum within which stakeholders from around the world could exchange views on the principles to be pursued, the aspects of AI requiring urgent global governance and ways to resolve each issue. Critically, what is needed is a clear roadmap to the global governance of AI, with a firm timeline.

An AIGF could underpin the work of the new high-level advisory body for AI and both would be tasked with the development of the roadmap, leading to the establishment of a UN Framework Convention on AI.

In recent months the AI hare has shown its ability to cover a long distance in a short time. The regulation tortoise has left the starting line but has a lot of catching up to do. The length of the race has just been shortened, so the hare’s recent sprint is of serious concern. In Aesop’s fable, the tortoise ultimately wins because the over-confident hare takes a roadside siesta. Humanity should not assume that AI will do likewise.

A concerted effort is needed to complete the EU AI Act and the Council of Europe’s Framework Convention on AI. Meanwhile at the UN, stakeholders need to be brought together urgently to share their views and work with states to establish an effective, timely and global AI governance structure.

The UN Accord on the governance of AI needs to be articulated, and the prospect of effective and timely global governance ushering in an era of AI Safety needs to be given the highest global priority. The proposed Summit on AI Safety in the UK this autumn should provide the first checkpoint.

Robert Whitfield is Chair of the One World Trust and Chair of the World Federalist Movement/Institute for Global Policy’s Transnational Working Group on AI.

IPS UN Bureau

 

