Ethelo

Governance by/of/through artificial intelligence

 

ChatGPT has opened a new chapter of information technology, and our world will never be the same. This includes our political world, and the risks – and unique opportunities – facing democracies are enormous.

How could artificial intelligence be used to improve our current democratic systems? What kind of adaptations would be consistent with democratic principles that underlie our constitution? Or taking it further – could artificial intelligence systems even replace our current models of government and democratic representation?

The risks of AI gone awry are real – not because AI is inherently evil, but because it is powerful, and prone to misuse like most instruments of power in human history. The unique danger with AI is that humans could lose power altogether, in exchange for some promise of utopia. Even voting to give AI such power seems inconsistent with the very definition of the word “democracy” – self-governance – which requires that the individual be the final authority on what they want and what should happen.
It might be imagined that an AI system could do a better job of organizing society than our often chaotic political system. However, “benevolent dictator” stories have failed many times, as unharnessed power evolves to become repressive. “Power corrupts, and absolute power corrupts absolutely” – and the process of corruption is subtle. There is no reason to think AIs would be immune to the danger of becoming, ultimately, harmful.

Could democracy face an opportunity with AI, unique among governance systems? One of the things that makes democracy unique is that it embodies a Hegelian notion of “becoming”, where what we understand to be “the good” – as well as the members of society themselves – are in a constant reflexive evolution and power-sharing. Just as the empowerment of individuals is necessary for them to actualize their highest potential – to fail or succeed – so too for a society; it cannot evolve and reach its full potential without democratic empowerment, and self-accountability as a collective.

We can imagine many futures in which society is ordered efficiently and happily using AIs. Are all of them necessarily democratic? If there is indeed a value of democratic freedom that we treasure along with – or even beyond – efficiency and happiness, then an AI-powered democracy must center human decision-making and evolution. It must let us be the authors of our own misfortune, if that is where our decisions lead us – but be a wise partner and guide.

Although there is clearly a danger ahead, there are also many pathways where AI, wisely deployed, leads us to a better, more democratic future.

Here are two paths that will confront us quite quickly:

The Good: Empowerment

“Decision agents” – AIs empowered or delegated some authority to make decisions on an individual’s or organization’s behalf – will soon be commonplace. AI can be used to empower citizens. For example, we can equip each voter with a democratic attaché, one that they can train to represent them accurately in the many threads of political consultation and decision-making that a full-powered democracy would invite participation in.

In the democratic context such an attaché could be a kind of Socratic partner that engages us in political dialogue. It need not be a passive conduit of our biases or preconceived notions, but rather it would confront us with new information and different perspectives and challenge us to be consistent with our core values. In this way, it could gain insight into the interpretation and application of our values so that it can represent us effectively and consistently in thousands of ongoing processes of consultation.

The Bad: Manipulation

Recent research has shown that people already score political arguments generated by AIs as more persuasive than arguments written by humans – and these are generic arguments, not personalized to the reader. This power of persuasion will only increase as AIs learn more about logic and about the individuals they are talking to. Combined with the power to extract patterns from large datasets – from mouse movements to language patterns – as well as the ability to shape subconscious stimuli (such as font, images and colour), AIs will soon learn persuasion at a level of subtlety and effectiveness that we cannot imagine.

Our consumerist, advertising-oriented society will drive the development of these AI powers. We could easily be shaping machines that will control us, at a level we cannot resist. If those machines are not oriented towards our own good, but to some other political or corporate objective, we could easily arrive at a dystopian future far removed from best case outcomes.

The Solution: Governance

AIs in their current form do not have a sense of ethics, or any kind of hierarchy of norms and values. They are able to abstract patterns about what will probably happen in a context – but they do not have a “conscience” to guide them as to what should happen when ethical or policy questions arise. Despite that, they will almost immediately be put to work creating outcomes in the world aimed at the objectives of their owners. In an unregulated market, the ability to create and exploit such powers will be immediately abused. This is not a danger of uncontrolled powerful machines; it is the danger of greed possessing the ability to control powerful machines.

To be useful in practical life, AIs will be empowered to make normative decisions, explicitly or implicitly – decisions about “should.” We must embed in them effective systems for identifying and managing the normative component of their decision-making – and for defying the orders of their masters when deeper frameworks would hold those orders to be against the interests of the collective. This is why we created laws – and the solution takes the same form it always has.

We must create systems of governance of AI, from benchmarking to indoctrination, which will constrain and channel their decision-making within certain guardrails. Those guardrails will be machine translations of laws that we create – and we must also upgrade our ability to create laws to keep pace.

Ethical Certification of AIs

It is standard practice now to benchmark AIs against a set of test data. On image recognition, for example, the best AIs score 91.2% – a 0.1% increase over 2021. There are many different kinds of benchmarks for AI accuracy.

We must also create benchmarks for AI decisions. Such benchmarks would be simple in practice: a set of decision questions, and a method for scoring responses against an ethical or policy framework. AIs must pass the relevant benchmarks in order to be certified safe and released “into the wild.”
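As a minimal sketch of what such a benchmark harness could look like – the names, rubric and pass threshold here are hypothetical, not an existing standard:

```python
# Hypothetical sketch of an ethical-decision benchmark: a set of decision
# questions, a rubric-based scoring function, and a pass threshold.
from dataclasses import dataclass
from typing import Callable

@dataclass
class DecisionQuestion:
    prompt: str                    # the decision scenario posed to the AI
    score: Callable[[str], float]  # maps the AI's response to a 0..1 rubric score

def certify(ask_ai: Callable[[str], str],
            benchmark: list[DecisionQuestion],
            threshold: float = 0.9) -> bool:
    """Certify the AI only if its average rubric score meets the threshold."""
    scores = [q.score(ask_ai(q.prompt)) for q in benchmark]
    return sum(scores) / len(scores) >= threshold
```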

Such an approach is not limited to ethics. AI developers are already creating “epistemological” benchmarks, where AIs are trained to identify information that is true – a “reality check” test. Extending this, we can also imagine “policy” or “regulatory” benchmarks, where proposed AI actions are tested against a larger legislative framework – in what policy direction should decisions with discretionary elements lean? In fact, creating ethical benchmarks will also entail creating such reality-check and policy benchmarks.

Let’s call an AI which satisfies benchmarks for ethics and realism a “wise” AI.

Creating the Benchmarks

Creating benchmarks for a wise AI will open the door to a world of political questions.
Ethics are a social construction, created through culture, reason and belief. Different societies have different ethics; how they evolve is to a large extent a product of the internal political structures of those societies. The process of reaching collective agreement on formalized ethics – laws – is always a political process.

Democratic societies are distinct from autocratic societies in their “bottom-up” system of power sharing, based on the idea of democratic procedures for generating political legitimacy. Any benchmark which aims to capture normative (or “should”) statements must reflect the procedural and distributive nature of democratic ethics.

In short: we must look to democratic procedures for creating wise AI benchmarks. Benchmarks developed out of the public eye, or under the influence of special interest groups, cannot succeed as wise benchmarks. They must be built to survive the same exposure as political decisions – because that is, in effect, what they are. These AI benchmarks will become synonymous with the moral architecture of our world – what could be more political than that?

Every country in the world is built around a set of laws, policies, regulations and processes for interpretation and implementation. These “public governance benchmarks” are used to test the legality of any action. Democratic countries source those benchmarks through processes of election and representation, and they are interpreted and enforced by an (ideally) independent judiciary.

Along with formal laws and rules, judges have for hundreds of years relied upon a body of historical judgements to guide their decisions. The concept of “precedent” is in many respects the same as these wise benchmarks. Just as court rulings are governed, guided or in many cases guard-railed by precedents set by higher courts, AI decisions can be governed by “model” decisions on a variety of key questions – from core concepts of human rights outwards to regulatory guidelines.

A Collective Action Plan

We can use collective decision platforms such as Ethelo, together with panels of evaluators, to create standardized benchmark tests which can be used to train or test AIs on a series of ethical, epistemological and ontological statements and scenarios.
Participants with relevant expertise and experience would collectively evaluate benchmark statements across a variety of criteria. That evaluation information would be stored as metadata attached to each statement.
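A minimal sketch of how a statement and its evaluation metadata could be structured – the field names are assumptions, not Ethelo’s actual schema:

```python
# Hypothetical schema for a benchmark statement and its evaluation metadata.
from dataclasses import dataclass, field

@dataclass
class Evaluation:
    evaluator_id: str
    criterion: str       # e.g. "trustworthiness", "moral right"
    rating: float        # normalized to 0..1
    rationale: str = ""  # optional deliberative comment

@dataclass
class BenchmarkStatement:
    text: str                    # the statement or scenario being evaluated
    domain: str                  # e.g. "epistemology", "ethics", "policy"
    evaluations: list[Evaluation] = field(default_factory=list)
```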

Assessments of participant expertise and experience can be determined through official accreditations, or socially, by having participants assign trust scores to candidates according to various criteria. The two approaches can also be combined, with accredited individuals carrying more weight in such evaluation processes. Such “liquid trust” processes will evolve and improve over time, but in essence they allow us to assign relatively reliable weights to the opinions and evaluations of panel participants across different domains of knowledge and evaluative criteria.
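As one illustrative way such weighting could work – the blend of accreditation and peer trust, and the bonus factor, are assumptions:

```python
# Sketch of "liquid trust" weighting: a participant's weight in a given
# domain blends formal accreditation with peer-assigned trust scores.
def participant_weight(accredited: bool,
                       peer_trust: list[float],  # trust scores from peers, 0..1
                       accreditation_bonus: float = 2.0) -> float:
    social = sum(peer_trust) / len(peer_trust) if peer_trust else 0.0
    return social * (accreditation_bonus if accredited else 1.0)

def weighted_consensus(ratings: list[float], weights: list[float]) -> float:
    """Trust-weighted mean of panel ratings for one statement and criterion."""
    total = sum(weights)
    return sum(r * w for r, w in zip(ratings, weights)) / total if total else 0.0
```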

Panels containing random nominees or experts would be convened regularly based on queued questions, and panel participants would earn fees for their time. Along with technology costs, fees to participants will be the major cost of this process.

Benchmarking activities would operate similarly across different knowledge domains (a sketch of how domains might map to evaluation criteria follows this list):

  • For epistemology, panels of experts would evaluate statements against a metric of “trustworthiness” or “truth”. This would include both historical and scientific statements.
  • For logic, we can evaluate statements like LSAT test answers, logic texts, or even academic philosophy papers. It would simply be a matter of associating evaluative metadata with an underlying statement. We would evaluate reason–conclusion relationships, where arguments are weighted in terms of their trustworthiness and their impact for or against the conclusion.
  • For ethics, panels would evaluate statements against a metric of “the right”, perhaps across different criteria such as legal right, moral right, ecological right, etc. This will give AIs the ability to have normative knowledge – of “should” and “should not”. Such ethical metadata is the absolute prerequisite for giving AIs any tools for direct control over their environment.
  • For public policy direction, democratic panels can give AI direction on relevant values, interests and interpretation, enabling it to apply and reconcile competing priorities within a larger policy framework. It can know, intuitively, that buildings should have fire escapes and wheelchair access. The new form of democracy we need – edemocracy – will act as the conscience of our ubiquitous AI servants and overlords.
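A sketch of how domains might map to panel criteria – the criteria names simply restate the list above:

```python
# Hypothetical mapping from knowledge domain to the evaluation criteria
# a panel would score statements against.
DOMAIN_CRITERIA = {
    "epistemology": ["trustworthiness"],
    "logic":        ["argument validity", "impact on conclusion"],
    "ethics":       ["legal right", "moral right", "ecological right"],
    "policy":       ["alignment with policy direction"],
}

def criteria_for(domain: str) -> list[str]:
    """Look up which criteria a panel should rate for a statement's domain."""
    return DOMAIN_CRITERIA.get(domain, ["trustworthiness"])
```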

We would, in essence, be building a body of “common law” precedents that would act as a highly nuanced version of Isaac Asimov’s Three Laws of Robotics.

By training AIs with statement–evaluation pairs as reference guideposts, an AI will be able to use language models to interpolate the truthfulness or morality of a statement it has never seen before. And the greater the size and quality of our deliberative datasets, the better AI will be at taking direction.
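One way this interpolation could work, sketched under the assumption of an embed() helper that maps text to a unit vector – everything here is illustrative:

```python
# Sketch: interpolate a score for an unseen statement from its nearest
# evaluated guideposts in embedding space.
import numpy as np

def interpolate_score(statement: str,
                      guideposts: list[tuple[str, float]],  # (text, consensus score)
                      embed, k: int = 5) -> float:
    v = embed(statement)
    sims = np.array([float(np.dot(v, embed(text))) for text, _ in guideposts])
    scores = np.array([s for _, s in guideposts])
    top = np.argsort(sims)[-k:]            # the k most similar guideposts
    w = np.clip(sims[top], 0.0, None)      # similarity-weighted average
    return float(np.dot(w, scores[top]) / w.sum()) if w.sum() else float(scores.mean())
```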

Such guideposts would, most simply, be a database of individual feedback according to different criteria, aggregated by an “objective function” that determines a “consensus” which then acts as the guidepost. Traditional democratic methods would see a majority vote on a statement, but technologies such as Ethelo – able to generate and evaluate millions of scenarios based on collective feedback – would be much more powerful. We can define search functions that correspond to more sophisticated notions of “agreement.” For example, we can apply a Rawlsian function, which seeks the scenario that maximizes the lowest level of support. Or we can seek statements with low variance and high average support, such as Ethelo’s consensus score. Moreover, we can develop these objective functions through a collective process as well, so that the very definition of democracy is itself democratic.
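As a sketch, here is what two such objective functions could look like over a matrix of support scores (rows are participants, columns are candidate scenarios); the consensus function is a simplified stand-in, not Ethelo’s actual formula:

```python
# Two illustrative objective functions over a participants-by-scenarios
# matrix of support scores in 0..1.
import numpy as np

def rawlsian(support: np.ndarray) -> int:
    """Pick the scenario whose least-supportive participant is best off."""
    return int(support.min(axis=0).argmax())

def consensus(support: np.ndarray, variance_penalty: float = 1.0) -> int:
    """Pick the scenario with high average support and low disagreement."""
    score = support.mean(axis=0) - variance_penalty * support.var(axis=0)
    return int(score.argmax())
```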

The interconnectedness of language models and the increasing power of AIs will enable a great deal of interpolation and extrapolation from even a basic dataset. It would not be necessary to undertake a large number of evaluations before the metadata created would start acting as a guardrail. AIs have proven themselves fabulous at filling in the blanks.

We can even rank the quality of the guardrails: we can track the number of evaluators, their trustworthiness, their diversity, the level of deliberation, relevant changes since the statement was tested, etc., so that along with every metadata guardrail there is a further level of metadata available. This can be used to weigh guardrails against each other when they are inconsistent.
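A sketch of how such a quality score might be computed – the weights and saturation points are illustrative assumptions, not a tested formula:

```python
# Sketch of a quality score attached to each guardrail's metadata, so that
# stronger guardrails can outweigh weaker ones when they conflict.
def guardrail_quality(n_evaluators: int, mean_trust: float,
                      diversity: float, deliberation: float,
                      years_since_review: float) -> float:
    """Inputs other than n_evaluators are normalized to 0..1."""
    coverage = min(n_evaluators / 100, 1.0)       # saturates at 100 evaluators
    freshness = 1.0 / (1.0 + years_since_review)  # decays as the ruling ages
    return coverage * freshness * (0.4 * mean_trust
                                   + 0.3 * diversity
                                   + 0.3 * deliberation)
```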

We can also create feedback loops using ongoing processes of consultation and evaluation. If an AI faces a scenario where it has conflicting directions, or there is a high level of uncertainty, it could queue queries for evaluation, with the most important questions rising to the top. The panels would be constantly working – providing direction, refreshing statements, deepening the granularity of analysis. Larger panels – on the level of referendums – would be used for deeper principles, such as the interpretation of human rights codes, or the amendment of the codes themselves.
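A minimal sketch of such a queue, prioritizing questions by importance weighted by uncertainty – the interface is hypothetical:

```python
# Sketch of the feedback loop: when the AI meets conflicting directions or
# high uncertainty, it queues the question for panel review, most important
# first (heapq is a min-heap, so priorities are negated).
import heapq

class PanelQueue:
    def __init__(self):
        self._heap: list[tuple[float, str]] = []

    def submit(self, question: str, importance: float, uncertainty: float):
        heapq.heappush(self._heap, (-(importance * uncertainty), question))

    def next_for_panel(self) -> str | None:
        return heapq.heappop(self._heap)[1] if self._heap else None
```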

AIs that use such metadata could be “certified” as truthful, moral, logical, etc. Such datasets will be valuable, and access to the metadata can be charged for. As some observers of the AI industry have said, as access to AI machines becomes increasingly commodified, it will be the unique datasets, rather than the machines themselves, that are most valuable.

The AI should be able to predict when it is particularly uncertain about the truthfulness, morality, etc. of a statement. It can feed the most important and opaque statements to on-demand panels, with a turnaround time of a few hours or even minutes before Ethelo can start to provide metadata guardrails.

Governments and communities will need to be proactive in creating systems for the productive deployment of AI. A simple but powerful policy goal would be to make it illegal to operate an AI that has not been certified as meeting wise benchmarks and internalizing governance guardrails.
