Philosophical Predilections of Artificial Intelligence

Have you ever wondered how ethics shapes the development of AI? This seminar will open your eyes to how philosophical questions can change the way we approach modern technology.

On 4 July, a seminar in the «Digital Philosophy Laboratory» series will take place at SPbU (St Petersburg State University): Mendeleevskaya Liniya, 5, room 8.

What we will discuss:

A separate topic for discussion:

We will compare how AI learns with Plato's doctrine of recollection and Aristotle's theory of the four causes. You will be surprised how relevant these ancient ideas can still be today!

Register for the seminar

Access to the institute building requires registration with passport details; please bring your passport with you. SPbU students and staff may enter the building with their staff ID or student card.

Speakers

Vadim Yurievich Perov

Head of the Department of Ethics, Associate Professor, Candidate of Philosophical Sciences

Member of the SPbU Academic Council and of its Standing Commission on Economics, Finance and Social Development.
Co-director of the programme "Innovative Activity in Social Sciences and Humanities Education" (SPbU)
Participant in the seminar "Building Information Resources for the Spiritual and Moral Development of the Individual in the Information Society"
Member of the Expert Commission on the Ethics of Social Advertising and Socially Significant Information for the Northwestern Federal District.
Participant in the public hearings of the «МЫ – ПЕТЕРБУРЖЦЫ» ("We Are Petersburgers") movement
Married, two daughters.

Igor Yurievich Larionov

Associate Professor, Candidate of Philosophical Sciences, Head of the Department of Philosophical Anthropology
Email: i.larionov@spbu.ru

Moderator: Dmitry Andreevich Yarochkin

Master of Ethics
2022 – … Coordinator of the "Digital Philosophy Laboratory" project

Our partners

Digital Philosophy

Go to the website

Media sponsor

Сфера СПБ X Мск

Media coverage

Родительское собрание / Цифровое поколение ("Parents' Meeting / The Digital Generation")

Public Television of Russia (ОТР)

Watch

24.06.2024

Two Ways to Extract Sounds from Data: How and Why

SPbU's official blog on Habr

Read

24.05.2024

Eternal Questions in the Information Society: On Digital Philosophy

«Генрих Терагерц» ("Heinrich Terahertz"), an SPbU podcast

Listen

20.03.2024

Philosophy. IT. Ethics. Values. Technology. Love. Medicine. Money. Law.

The YouTube podcast «Научная тематика» ("Scientific Topics")

Watch

7.03.2024

Music of the First Day of the Great Patriotic War Generated in Russia from Wartime Diaries

TASS

Read

23.02.2024

«We Will All Die. But That's Not Certain»: Destitute Cyborgs vs. Almighty Syndicates. What Cyberpunk Is Really About

RIA Novosti

Listen

04.01.2024

Materials for the talk

Tobias Rees, 'Non-Human Words: On GPT-3 as a Philosophical Laboratory', Daedalus 2022; 151 (2): 168–182. doi: https://doi.org/10.1162/daed_a_01908. I try to make compelling the argument that AI companies like OpenAI, Google, Facebook, or Microsoft effectively are philosophical laboratories (insofar as they disrupt the old concepts/ontologies we live by) and I ask what it would mean to build AI products from the perspective of the philosophical disruptions they provoke: can we liberate AI from the concept of the human we inherited from the past? -169-
I have come to think of the development of GPT-3 and its kin as a far-reaching, epoch-making philosophical event: the silent, hardly noticed undoing of the up-until-now exclusive link between humans and words. The consequences of this undoing are sweeping: the entire modern world–the modern experience of what it is to be human, as well as the modern understanding of reality–is grounded in the idea that we humans are the only talking thing in a world of mute things. No longer -169-
If until then language was a divine gift that enabled humans to know the eternal essence/names of things, then now language became the human unique power to name things and to thereby order and know them and bring them under human control -169-
The exemplary place where this new concept of language–of humans–is articulated is René Descartes’s Discourse on the Method, published anonymously in 1637. -169- According to Descartes, language is a power only we humans possess, a power that sets us apart, in a qualitative, unbridgeable way from everything else there is, notably from animals and machines. It is the fact that we have language, for Descartes a proxy for reason (logos), that we humans are more than mere matter extended in space: we are subjects, capable of thought and knowledge. -170-
I understand that there are those who judge me to be naive. I am thinking of the many critics who have rejected, often with vehemence, the idea that GPT-3 really has words. -171-
If one asks the critics what 'true' here refers to, the common answer is 'understanding meaning'. What though does meaning, what does understanding, refer to? Why, and in what sense, does GPT-3 or other LLMs not have it? -171-
According to Bender, the intersubjective, intentional production and negotiation of that language is a quality unique to humans. Non-humans have “a priori no way to learn meaning.” -172-
In rough strokes, there have been three epochs in the history of how humans understand language and experience the capacity to speak: I call them ontology (words and being), representation (words and things), and existence (words and meaning). - 173-
One privileged form contemplation took was a careful analysis of language. The reason for this was the conviction that humans had language (logos) only insofar as they had received a spark of the divine logos–a divine logos that also organized reality: intrinsic in language was thus a path toward the real. All that was necessary was for humans to direct their thinking to the structure of language and thought (logos). As Aristotle puts it in his Peri Hermeneias: Spoken words are the symbols of mental experience and written words are the symbols of spoken words. Just as all men have not the same writing, so all men have not the same speech sounds, but the mental experiences, which these directly symbolize, are the same for all, as also are those things of which our experiences are the images. -173-
Aristotle’s assumption that language is the path to understanding being–that there is a direct correlation between words and things–builds directly on Plato’s theory of ideas, and remained the unchallenged reference for well over a thousand years. Things only began to change around 1300. -174
Words and things. The parting ways of words and things was a gradual and cumulative process. Its earliest indicator was the emergence of nominalism in the early fourteenth century, in the works of Peter Abelard and William of Ockham. Independently from one another, the two clerics wondered whether words might in reality be arbitrary conventions invented by humans, rather than the way to true being. -174-
The second is that there is no timeless truth to the concept of the human and language upheld by critics of GPT-3. It is a historically contingent concept. To claim otherwise would mean to miss the historicity of the presuppositions on which the plausibility of the argument is dependent. To me, the importance of GPT-3 is that it opens up a whole new way of thinking about language–and about humans and machines–that exceeds the logical potential of argument that the critics uphold. GPT-3, that is, provides us with the opportunity to think and experience otherwise, in ways that are so new/different that they cannot be accommodated by how we have thought/experienced thus far. Once this newness is in the world, the old, I think, can no longer be saved. Though what is this newness? -177-

Leopold Aschenbrenner — formerly of OpenAI's Superalignment team, now founder of an investment firm focused on artificial general intelligence (AGI) — has posted a massive, provocative essay putting a long lens on AI's future. https://ai.gov.ru/knowledgebase/investitsionnaya-aktivnost/2024_osvedomlennosty_o_situacii_-_na_desyatiletie_vpered_situational_awareness_-_the_decade_ahead_leopold_aschenbrenner/
While the inference is simple, the implication is striking. Another jump like that very well could take us to AGI, to models as smart as PhDs or experts that can work beside us as coworkers. Perhaps most importantly, if these AI systems could automate AI research itself, that would set in motion intense feedback loops—the topic of the next piece in the series. --9--
“Unhobbling” gains: By default, models learn a lot of amazing raw capabilities, but they are hobbled in all sorts of dumb ways, limiting their practical value. With simple algorithmic improvements like reinforcement learning from human feedback (RLHF), chain-of-thought (CoT), tools, and scaffolding, we can unlock significant latent capabilities --19--
Reinforcement learning from human feedback (RLHF). Base models have incredible latent capabilities, but they’re raw and incredibly hard to work with. [Footnote 25: That’s the magic of unsupervised learning, in some sense: to better predict the next token, to make perplexity go down, models learn incredibly rich internal representations, everything from (famously) sentiment to complex world models. But, out of the box, they’re hobbled: they’re using their incredible internal representations merely to predict the next token in random internet text, rather than applying them in the best way to actually try to solve your problem.] While the popular conception of RLHF is that it merely censors swear words, RLHF has been key to making models actually useful and commercially valuable (rather than making models predict random internet text, it gets them to actually apply their capabilities to try to answer your question!). This was the magic of ChatGPT—well-done RLHF made models usable and useful to real people for the first time. The original InstructGPT paper has a great quantification of this: an RLHF’d small model was equivalent to a non-RLHF’d >100x larger model in terms of human rater preference.
• Chain of Thought (CoT). As discussed above, CoT started being widely used just 2 years ago and can provide the equivalent of a >10x effective compute increase on math/reasoning problems.
• Scaffolding. Think of CoT++: rather than just asking a model to solve a problem, have one model make a plan of attack, have another propose a bunch of possible solutions, have another critique it, and so on. For example, on HumanEval (coding problems), simple scaffolding enables GPT-3.5 to outperform un-scaffolded GPT-4. On SWE-Bench (a benchmark of solving real-world software engineering tasks), GPT-4 can only solve ~2% correctly, while with Devin’s agent scaffolding it jumps to 14-23%. (Unlocking agency is only in its infancy though, as I’ll discuss more later.) --30-31--
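To make the plan/propose/critique scaffolding pattern described above concrete, here is a minimal sketch. It assumes a hypothetical `complete(prompt)` helper standing in for any LLM completion call; the function names and prompt wording are illustrative, not taken from the essay.

```python
# Minimal sketch of the plan -> propose -> critique scaffolding pattern.

def complete(prompt: str) -> str:
    """Placeholder for a call to some LLM completion API (hypothetical helper)."""
    raise NotImplementedError

def scaffolded_solve(problem: str, n_candidates: int = 3) -> str:
    # 1. One model call drafts a plan of attack.
    plan = complete(f"Problem:\n{problem}\n\nWrite a step-by-step plan for solving it.")

    # 2. Several calls propose candidate solutions that follow the plan.
    candidates = [
        complete(f"Problem:\n{problem}\n\nPlan:\n{plan}\n\nPropose a complete solution.")
        for _ in range(n_candidates)
    ]

    # 3. A critic call reviews the candidates and returns the best (repaired) one.
    numbered = "\n\n".join(f"Candidate {i + 1}:\n{c}" for i, c in enumerate(candidates))
    return complete(
        f"Problem:\n{problem}\n\n{numbered}\n\n"
        "Critique the candidates, fix any errors, and return the single best solution."
    )
```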

AI progress won’t stop at human-level.--47--
Academics are way worse than automated AI researchers: they can’t work at 10x or 100x human speed, they can’t read and internalize every ML paper ever written, they can’t spend a decade checking every line of code, replicate themselves to avoid onboarding-bottlenecks, etc.--60--
Superintelligence will be like this across many domains. It’ll find exploits in human code too subtle for any human to notice, and it’ll generate code too complicated for any human to understand even if the model spent decades trying to explain it--67--

Provide a decisive and overwhelming military advantage--70--
We’re going to drive the AGI datacenters to the Middle East, under the thumb of brutal, capricious autocrats. I’d prefer clean energy too—but this is simply too important for US national security. We will need a new level of determination to make this happen. The power constraint can, must, and will be solved. --85--
The clusters can be built in the US, and we have to get our act together to make sure it happens in the US. American national security must come first, before the allure of free-flowing Middle Eastern cash, arcane regulation, or even, yes, admirable climate commitments. --87

Reliably controlling AI systems much smarter than we are is an unsolved technical problem. And while it is a solvable problem, things could very easily go off the rails during a rapid intelligence explosion. Managing this will be extremely tense; failure could easily be catastrophic. --105-- That said, I’m worried fully reverse-engineering superhuman AI systems will just be an intractable problem—similar to, say, “fully reverse engineering the human brain”—and I’d put this work mostly in the “ambitious moonshot for AI safety” rather than “default plan for muddling through” bucket. --118--
“Top-down” interpretability. If mechanistic interpretability tries to reverse engineer neural networks “from the bottom up,” other work takes a more targeted, “top-down” approach, trying to locate information in a model without full understanding of how it is processed.--119--
Representation Engineering and Inference-time Interventions demonstrate using top-down techniques to detect lying and hallucinations and surgically control model behavior on jailbreaking, power-seeking, fairness, truthfulness, and more. There’s other creative work on lie detection that doesn’t even require model internals. --119--
Chain-of-thought interpretability. As mentioned earlier, I think it’s quite plausible that we’ll bootstrap our way to AGI with systems that “think out loud” via chains of thought --119--
But we also don’t have to solve this problem just on our own. If we manage to align somewhat-superhuman systems enough to trust them, we’ll be in an incredible position: we’ll have millions of automated AI researchers, smarter than the best AI researchers, at our disposal. Leveraging this army of automated researchers properly to solve alignment for even-more superhuman systems will be decisive.--121--
Superdefense
Security
Monitoring
Targeted capability limitations. As much as possible, we should try to limit the model’s capabilities in targeted ways that reduce fallout from failure.
Targeted training method restrictions. There are likely some ways of training models that are inherently riskier—more likely to produce severe misalignments—than others --123-124
Superintelligence will be the most powerful technology—and most powerful weapon—mankind has ever developed--126
And those who herald the age of AGI in SF too often ignore the elephant in the room: superintelligence is a matter of national security, and the United States must win.--127
Most importantly, the robotic military and police force could be wholly controlled by a single political leader, and programmed to be perfectly obedient—no more risk of coups or popular rebellions. Whereas past dictatorships were never permanent, superintelligence could eliminate basically all historical threats to a dictator’s rule and lock in their power (cf value lock-in)--134--
I believe in freedom and democracy, strongly, because I don’t know what the right values are. In the long arc of history, “time has upset many fighting faiths.” I believe we should place our faith in mechanisms of error correction, experimentation, competition, and adaptation --134-135
The United States must lead, and use that lead to enforce safety norms on the rest of the world. That’s the path we took with nukes, offering assistance on the peaceful uses of nuclear technology in exchange for an international nonproliferation regime (ultimately underwritten by American military power)—and it’s the only path that’s been shown to work. --138--
The safety challenges of superintelligence would become extremely difficult to manage if you are in a neck-and-neck arms race.--138--
If and when it becomes clear that the US will decisively win, that’s when we offer a deal to China and other adversaries. They’ll know they won’t win, and so they’ll know their only option is to come to the table; and we’d rather avoid a feverish standoff or last-ditch military attempts on their part to sabotage Western efforts. In exchange for guaranteeing noninterference in their affairs, and sharing the peaceful benefits of superintelligence, a regime of nonproliferation, safety norms, and a semblance of stability post-superintelligence can be born --138-139--
The US has a lead. We just have to keep it. And we’re screwing that up right now. Most of all, we must rapidly and radically lock down the AI labs, before we leak key AGI breakthroughs in the next 12-24 months (or the AGI weights themselves). We must build the compute clusters in the US, not in dictatorships that offer easy money. And yes, American AI labs have a duty to work with the intelligence community and the military --139--
It seems to me that there is a real chance that the AGI endgame plays out with the backdrop of world war. Then all bets are off.--140--
We will need the government to ensure even a semblance of a sane chain of command; you can’t have random CEOs (or random nonprofit boards) with the nuclear button. We will need the government to manage the severe safety challenges of superintelligence, to manage the fog of war of the intelligence explosion. We will need the government to deploy superintelligence to defend against whatever extreme threats unfold, to make it through the extraordinarily volatile and destabilized international situation that will follow. We will need the government to mobilize a democratic coalition to win the race with authoritarian powers, and forge (and enforce) a nonproliferation regime for the rest of the world. I wish it weren’t this way—but we will need the government. (Yes, regardless of the Administration.) --141-142--

The key insights from the Aristotelian approach to AI ethics are: AI systems should be developed and deployed as intelligent tools that enhance human flourishing, not as means to transcend or replace human nature. Human beings have a distinctive nature as especially social, rational, and communicative animals that is the basis for ethics. This nature is not shared by current or foreseeable AI systems. The fundamental purpose of a political community is to secure the common good and enable the flourishing of all its members as free and equal citizens. Ethics requires a richer conception of intelligence than just value-free means-end reasoning. It involves evaluating goals and appropriate means from a moral standpoint. There is a need for a novel human right to a human decision for certain significant decisions that impact rights, duties or basic interests of others. This could prohibit AI decision-making or allow opt-outs/appeals. Practical wisdom cannot be reduced to mechanical rule application like algorithms, as there will always be unforeseen circumstances requiring context-sensitive judgment. In summary, the Aristotelian approach sees AI as a tool to enhance human flourishing when developed and used responsibly in service of objective ethical values and the common good, not as a means to transcend or replace human nature and decision-making. --https://www.perplexity.ai/search/what-are-the-key-insights-mid0DxtPQEy0WBFGQCDJzA---
EXECUTIVE SUMMARY Many today take the view that the AI technological revolution is creating a radically new reality, one that demands a corresponding upheaval in our ethical thinking. This can generate a sense of helplessness in the face of rapid technological advances. But the idea that we start from an ethical blank slate in addressing the challenges and opportunities of this transformative technology is a fallacy. -2-
The Aristotelian approach constantly foregrounds the question of the good for which AI systems are developed and deployed and does not conceive of ethics as in competition with technological advance. -2-
The idea of AI systems as ‘intelligent tools’ to be deployed in order to advance the flourishing of individuals and communities, a focus very different from the dominant objective of the tech industry, which is to create Artificial General Intelligence that replicates human intellectual capacities across their entire spectrum. -3-
a case for a novel human right for the age of AI: a qualified right to a human decision that has the effect of prohibiting certain decisions from being made by AI systems, or else allowing opt-outs or appeals from such decisions where they are permissible. -3-
But if machine intelligence can eventually replicate or even out-perform human intelligence, where would that leave humans? Would the pervasive presence of AI in our lives be a negation of our humanity and an impediment to our ability to lead fulfilling human lives? Or can we incorporate intelligent machines into our lives in ways that dignify our humanity and promote our flourishing? It is this challenge, rather than the rather far-fetched anxiety about human extinction in a robot apocalypse, that is the most fundamental ‘existential’ challenge posed today by the powerful new forms of Artificial Intelligence. It is a challenge that concerns what it means to be human in the age of AI, rather than just one about ensuring the continued survival of humanity. -5-6-
We do already have rich ethical materials needed to engage with the challenges of the AI revolution, but to a significant degree they need to be rescued from present-day neglect, incorporated into our decision-making processes, and placed into dialogue with the dominant ideological frameworks that are currently steering the development of AI technologies - ideologies centred on the promotion of economic growth, maximising the fulfilment of subjective preferences, or complying with legal standards, such as human rights law. -6-
Ethics in the spirit of Aristotle, in short, is indispensable if we are to retain faith in our humanity in the age of AI. In making this case, this paper is divided into three parts.
Part I: THE ARISTOTELIAN FRAMEWORK: HUMAN NATURE, ETHICS, AND POLITICAL COMMUNITY. The Aristotelian framework for thinking about AI consists of three core interlocking ideas: (1) that human beings possess a distinctive nature as especially rational, communicative, and social animals, a nature that is not shared by existing AI systems or any such systems that are likely to be developed in the foreseeable future, and that an understanding of human nature is the basis for our ethical thought; (2) that ethics concerns the flourishing of human beings (human well-being) as individuals and members of communities, and also what they owe others, including those outside their communities and non-human beings (morality). The flourishing of each and all requires that we be free to exercise the core capacities that distinguish us as a kind of being. First is sociability: we are highly interdependent, requiring social cooperation with others for our material and moral well-being. Next is reason: the regulation of beliefs, emotions, and choices in accordance with rational judgments about both means and ends. And finally, communication through language and other forms of symbolic expression. Forms of social organisation (institutions, norms) and technology (tools and the know-how that enables their use) that advance the prosocial use of reason and communication promote flourishing; those that impede that use degrade flourishing. (3) The fundamental purpose of a political community is to secure the common (joint and several) good, i.e. to furnish the material, institutional, educational, and other conditions that enable the flourishing of each and every one of its members as free and equal citizens. The free, cooperative, prosocial use of the core capacities of reason and communication, by the diverse members of a community, is pluralistic democracy. And so, the overarching political purpose of the Aristotelian human community is not only compatible with, but requires both democracy and a form of liberalism.
Part II: AI SYSTEMS AS INTELLIGENT TOOLS. The positive vision of AI that emerges from the Aristotelian account is the idea that AI systems should be understood, developed, and deployed as 'intelligent tools' that enhance our ability to flourish as individuals and communities. They should not be regarded as means of transcending our human nature, or of creating a race of intelligent artificial beings with comparable ethical standing to humans (here Aristotle’s own profoundly mistaken justification of slavery is a powerful warning). Nor should we enable them to systematically replace valuable human endeavour with machines in domains such as work, personal relations, artistic activity, politics, and so on. In the words of the American philosopher Daniel Dennett, AI systems are “intelligent tools, not colleagues” (nor, we would add, friends, lovers, or fellow citizens).3 This will involve characterising some of the fundamental ways in which AI systems differ from human beings with respect to capacities such as consciousness, understanding, autonomous agency, and reasoning. This part considers the idea of AI as an 'intelligent tool' in relation to three topics: (a) work and leisure, and (b) a radically participatory democratic culture. -7-8-
PART III: SIGN-POSTS FOR REGULATION. The final part argues that the Aristotelian framework, and the conception of AI systems as intelligent instruments that it supports, can help steer the regulation of the development and deployment of AI, whether through domestic law, international law, ‘soft’ norms such as the UN Guiding Principles on Business and Human Rights or industry-wide codes of conduct -8-9
human beings possess a distinctive nature as political (especially social, rational, and communicative) animals, a nature that is not shared by existing AI systems or any such systems that are likely to be developed in the foreseeable future, and that a grasp of human nature is the basis for our ethical thought; -10-
enjoins the free exercise of core human capacities, notably cooperative sociability, reason, and communication-10- fundamental purpose of a political community is to advance the common good of its members, -10-
in taking seriously the rootedness of human morality in human nature, the Aristotelian framework is at odds with some trends of thought that have been prominent in the world of AI. Among these is the project of creating Artificial General Intelligence (AGI), a form of AI that replicates human intellectual capabilities across the board, including responsiveness to moral concerns. If human morality is keyed to essential features of human nature, then it is hard to see how any being that did not share our human nature would be appropriately either subject or attuned to such a morality. Instructive here is the example of non-human animals and the Olympian gods whose behaviour is often morally grotesque when judged according to the standards of human morality. An objectively different nature entails the applicability of an objectively different morality -12-
being human, in a biological sense, is an essential aspect of our identity as individuals, so that overcoming our human nature would effectively mean ceasing to exist as the distinct and interdependent individual that each of us is -12-13- The importance of choice. On the Aristotelian view, ethics is a domain of individual and collective human choices based on reason. -13-
Too often today, however, influential voices present the development of AI-based technologies and their increasing penetration into all domains of our life as inexorable processes over which we can exercise little or no control -13-
[A] thousand years of history and contemporary evidence make one thing abundantly clear: there is nothing automatic about new technologies bringing widespread prosperity. Whether they do or not is an economic, social, and political choice.10 -14-
[Footnote 10: Daron Acemoglu and Simon Johnson, Power and Progress: Our Thousand-Year Struggle over Technology and Prosperity (Basic Books, 2023), pp. 6, 13. The whole book is a powerful critique of the ‘productivity bandwagon’ thesis that new machines and production methods that increase productivity inexorably generate widespread prosperity by increasing the demand for workers, which in turn leads to higher wages.]
The White Paper conceives of a ‘proportionate’ approach to regulation as balancing innovation and economic growth against various risks regarding safety, fairness, etc. Yet neither economic growth nor innovation are themselves ultimate values to be set against concerns such as fairness. https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper (March 29, 2023) -17- This is especially so when the objective of wealth-maximisation is stressed without attention to how that wealth is to be justly distributed -17-
Why is the Benthamite approach a hollowed out conception of ethics? To begin with, well-being does not reduce to subjective experiences. Pleasure matters, but so does acquiring understanding, valuable friendships, and achievement. Similarly, preferences may be ill-informed by the facts or skewed by prejudices of various sorts or the outgrowth of subjection to oppressive practices. Equally, there are serious challenges confronting the idea that what we morally ought to do is maximise overall well-being. From an Aristotelian standpoint, we need an ethics that is tailored to the human condition. The utilitarian idea that we have the ability to survey all the options available to us, to calculate which one will maximise overall well-being, and to act on the basis of that calculation, is a double fantasy. It flies in the face of our limited cognitive capacities and our limited capacity to sacrifice our personal interests to the impartial maximisation of welfare. -18-
The Aristotelian will agree, of course, that we need to operate with a conception of the common good, especially in political decision-making, but as we show below, it is one that differs profoundly from the utilitarian measurement of an aggregated and inherently subjective good-18-
By contrast, many within the world of AI adopt an impoverished conception of intelligence. This focuses on value-free means-end reasoning. Aristotle’s ethics was developed in contradistinction to the strategic reasoning taught by the Sophists -19- Aristotle called that sort of intelligence “cleverness” (deinotes NE 1147a24-28). He recognised it as an essential, but subsidiary part of practical reason. As he clearly saw, ethics requires a richer conception of intelligence, one that includes the evaluation of goals and of the morally appropriate means of pursuing them. -18-19-
A related point is that much of the discourse of AI is about replacing human decision-making with algorithmic systems that will be more efficient and free from human biases. Against this tendency, we need to rediscover Aristotle’s idea that practical wisdom cannot be reduced to the mechanical application of rules, which is what an algorithm involves. Even the best rules we can devise, says Aristotle, will encounter unforeseen circumstances in which their rigid application would lead to bad or even disastrous consequences (NE V.10) -20-
If we take seriously not only the objectivity of ethical values, but also their plurality, then we will also be led to see that often there is no single correct answer to a given ethical question, but rather a bounded range of equally acceptable answers. Objectivity does not imply singularity. -20-
PART II AI SYSTEMS AS INTELLIGENT TOOLS Within the Aristotelian framework, AI is first and foremost a technology, one with potential for both harmful and beneficial uses, which needs to be responsibly incorporated into our lives as an enabler of individual and collective human flourishing. -27-
AI systems can have a place in our lives, both as individuals and communities, as intelligent tools, as instruments for human use, rather than as systematic replacements for human endeavours. Aristotelian ethics shows us the point of AI, what AI is good for: a tool for advancing human flourishing, for enabling human beings to employ our capacities more freely and fully. We should not try to create AGI, machines with human (or superhuman) reason, and if we did, we could not morally use them as tools (even if we retained the power to do so). Doing so would be to replicate Aristotle’s fundamental errors about “slaves by nature.” -28-
The field of Artificial Intelligence involves the development of algorithms embodied in computer programs. These algorithms can simulate functions that normally require intelligence when done by humans, such as identifying an image as that of a malignant tumour, translating from one language to another, assessing the risk of a creditor defaulting, writing a poem, and so on. Algorithms are mechanical procedures for solving a given problem by means of a finite series of steps. They are ‘mechanical’ procedures in the sense that they require no resort to judgement in their operation; every step in the procedure is precisely determined. Moreover, what we now call Artificial Intelligence is a form of technology capable of simulating the relevant human functions in a way that exhibits a considerable degree of autonomy, at least in the minimal sense that the operations of such systems do not require human guidance beyond a certain point and can produce outcomes that are neither fully controlled nor predictable by the designers and deployers of these systems -29-
AI systems have a significant environmental footprint [Kate Crawford, ‘Generative AI is guzzling water and energy’, Nature 626 (2024), p. 693]. -30- The immense cost of training AI systems, in terms of data and compute power, raises questions about whether such resources might be better deployed in alternative ways, as well as legitimate fears about the risks attendant upon the concentration of such great power in the hands of a small number of tech companies whose incentives are not obviously aligned with the common good. -30-
Our intellectual capabilities are rooted in the fact that we live a human life among other humans: we share a world with other humans with whom we also share a biological nature and membership of various communities with their established practices and traditions. -31-
[I]n principle any virtue, expressed in language or explicit decisions, can be simulated. Although at present these simulations are only partly convincing, that will change reasonably soon. AIs today can also effectively be deputies for us in situations that call for those virtues. What will not change in the foreseeable future is an AI’s inability to truly possess a virtue. A soldier who defuses a bomb has courage. The robots already deployed by security forces around the world to defuse bombs do not have courage.-33-
The better they are at embodying human virtues, the greater the accuracy (and sometimes the value) of the simulation, the more important it will be to define the ways in which they are not the real thing [Nigel Shadbolt and Roger Hampson, As If Human: Ethics and Artificial Intelligence (Yale University Press, 2024), p.9] -34-
[Footnote 39: Stuart Russell, Human Compatible: AI and the Problem of Control (Penguin, 2020), p.167. See discussion of the 'paperclip maximiser' in Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014), pp. 123-124.]
the slave is necessary as an animate tool that employs subsidiary tools to do necessary work - a tool that, in a puzzling turn of phrase, “shares in reason (logos) so far as to perceive it but not to possess it” (Politics 1254b22-23). -35-
In the realm of work, we are free to reject Aristotle’s belief that working at the direction of others, or for the sake of others’ enjoyment (e.g. in musical performance: Politics 1341b9-14), is inherently illiberal, and as such degrades our moral capacity-37- After all, the typical objectives of games (e.g. putting a ball in a hole, crossing an arbitrary line before others) may be trivial or valueless. Only when the value of ends comes into view – the valuable goods and services produced through work – can we grasp why the sense of achievement derived through work as a nurse, plumber, teacher, farmer etc. cannot for the great majority of people be satisfactorily replaced by proficiency at activities such as chess, golf or table-tennis.[For these two objections, see John Tasioulas, ‘Games and the Good’, Proceedings of the Aristotelian Society supp. Vol. 80 (2006) 237. In line with the first objection, however, play is a substantive good itself that the playing of games can realise. See also John Tasioulas, ‘Work and Play in the Shadow of AI’, in David Edmonds (ed), AI Morality (OUP, forthcoming).] -40-
Acts or activities that apply what he [Aristotle] calls a rational principle aim at something worthwhile by drawing upon faculties and dispositions whose exercise gives pleasure (a distinctive, associated pleasure) to the doer and enlarges also – here I reach beyond Aristotle – the doer’s understanding of the realities we inhabit. That is to say that the exercise of these faculties or dispositions affords both practical understanding of those realities and the satisfactions that we attain by learning to wrestle or struggle with them.[ David Wiggins, ‘Work, its moral meaning and import’, Philosophy 89 (2014), p.479] -40- The idea here is that work affords a distinct form of understanding that emerges through contact with an independent and potentially recalcitrant physical reality--40--
A focus on economic growth, through such measures as GDP, fails to reflect all the value we derive from work, notably, achievement, friendship, self-esteem, and an understanding of the world around us by engaging with it to produce goods and services --41--
How might AI enable more participatory forms of democracy today? One way forward is by adapting Athenian political principles and institutions to our contemporary conditions--45
First, by facilitating subsidiarity: AI could sort salient issues according to the locus of their greatest impact and provide for moderated deliberation among and decision by those most directly affected – whether the relevant locus was geographic or otherwise. [Footnote 67: Josiah Ober, ‘Democracy’s dignity’, American Political Science Review, 106.4 (2012), pp. 827-846.] Next, in cases where the locus of impact was larger, and the population of those directly impacted too great for efficient “face to face” deliberation and decision, AI could help to select an agenda-setting and advisory council by efficiently identifying truly random samples of larger populations. The chosen members of the council could then be provided with AI-generated expert opinion tailored to the issues at hand, enabling them to deliberate among themselves, and make a collective decision accordingly. The basic approach is modelled on a deliberative “mini-public” which sets its own agenda and makes policy recommendations to a larger assembly in which all affected citizens are able to participate. [Hélène Landemore, Open Democracy: Reinventing Popular Rule for the Twenty-First Century (Princeton, 2020).] At the level of the affected-citizen assembly, properly monitored (by responsible human agents) AI could again serve as a 'trusted expert' that would provide essential informational inputs to each citizen, tailored to each individual’s learning processes. Each citizen would receive the same curated information, but in language and format that would be most accessible to each. --46--
certainly no discrete body of law worthy of the name ‘law of AI’, no more than there is a ‘law of the horse’, for all the tremendous impact of this animal on human history.--49--
For example, is it realistic to suppose that true AGI – an AI system that was genuinely capable of replicating human cognitive functioning across its entire range of operation – would be so utterly morally obtuse as to annihilate all human beings in order to fulfil an instruction to produce paper clips?--52--
The overall conclusion is that while it cannot be dismissed, the worry about existential risk is seriously over-hyped relative to the other, more concrete and imminent, challenges posed by AI, such as the way it perpetuates and deepens existing injustices, its potentially devastating impact on democracy, work opportunities, and the prospects for living in a world characterised by meaningful forms of human interactio--53--
Anu Bradford has identified as underlying the regulatory strategies of the world’s three “digital empires” that are currently engaged in a struggle for global regulatory domination[Anu Bradford, Digital Empires: The Global Battle to Regulate Technology (Oxford University Press, 2023).] She calls these three templates “state-driven” (China), “market-driven” (United States), and “rights-driven -- 54--
Does an Aristotelian approach to ethics and politics offer any resources for the kind of trans-national cooperation necessary for the global regulation of AI? The answer is yes, by extrapolation from Aristotle’s understanding of interdependence and self-sufficiency. Aristotle’s conception of the polis as a natural end was predicated on the fact of human interdependence, the need we humans have of one another for material existence (living), which is a precondition for moral flourishing (living well). As such, the natural impulse of the individual to pursue personal flourishing is necessarily linked to social existence. --57--
A Human Right to a Human Decision
There are many momentous issues that potentially fall within the scope of the global regulation of AI, which include but are not limited to: limiting or prohibiting the use of AI in military contexts, ensuring that the most powerful AI models are open source for the purposes of transparency and accessibility, mitigating the environmental impact of AI, and so on. Our white paper has sought not to make specific regulatory proposals, but rather to articulate a general ethical framework that can guide us in shaping and evaluating such proposals. Part of that framework is a key role for democratic publics whose deliberations and decision-making may be assisted but not preempted by the contributions of philosophers, technologists or experts of other kinds. However, we now wish to conclude with one global regulatory proposal that resonates strongly both with the distinctive ethical challenges posed by AI and the distinctively humanistic preoccupations of the Aristotelian framework. This is the proposal that, in the age of AI, we need to recognise at least one novel human right: a human right to a human decision.87 The general category of decisions we have in mind are those which bear significantly on the rights, duties, or basic interests of others. Illustrative cases include decisions to hire someone, to sentence a criminal to imprisonment, to determine eligibility to receive a loan or social welfare benefits, to admit to a university, to assign priority in the allocation of medical treatment, and so on. Affirming a human right to a human decision involves the idea that with respect to certain decisions within this general class either (a) they should not be made by an AI system, or (b) if it is permissible for them to be made by an AI system, those potentially subject to its decisions should have the power either to (i) opt out of an AI-based decision-making process in favour of a human decision, or (ii) appeal from the AI-based decision to a human decision-maker. Moreover, the claim is not only that it is in some general sense wrong to deviate from these requirements, but rather that there is a human right to adherence to these requirements. This means that there is a moral right, on the part of each human being, that these requirements be complied with in their case.88 In addition, this moral right furnishes a basis for incorporating such a right … --61--
[Footnote 87: The idea of such a right is gaining traction in various jurisdictions, including the EU and the US. Article 22 of the European Union’s General Data Protection Regulation sets out a qualified ‘right not to be subject to a decision based solely on automated processing’. https://gdpr-info.eu/ Meanwhile, the Blueprint for an AI Bill of Rights, published by the White House Office of Science and Technology Policy in 2022, includes the guideline that individuals should be able to ‘opt out from automated systems in favour of a human alternative, where appropriate’. https://www.whitehouse.gov/ostp/ai-bill-of-rights]
[Footnote 88: John Tasioulas, ‘A Human Right to a Human Decision’ (forthcoming)]
The mooted benefits this would bring include (a) potentially huge efficiency gains, since these AI systems will be faster and cheaper than human judges, (b) potentially more accurate decisions, given that AI systems will not be vulnerable to human cognitive and other biases or vulnerabilities such as the need for sleep, as well as (c) more consistent decisions, ensuring that like cases are treated alike, in contrast to the ‘noise’ (unwanted variability) that notoriously afflicts legal decision-making by humans.91 --62--
Moreover, looking to outcomes beyond the quality of decision, the widespread deployment of AI adjudicative tools risks having unwelcome side-effects. Their use may lead to the atrophying of judicial virtues among humans who will have been deprived of the opportunity to develop their capacity for legal judgement in the context of real-life decision-making rather than the law school classroom--63--
in our discussions of work and democracy in Part II. Those activities are valued not just because of the valuable outcomes they produce e.g. income or good law, but also because of the intrinsic value of the processes through which these outcomes are achieved. These are processes exemplifying achievement, for example, or respectful collective deliberation and decision-making about the common good among free and equal citizens. Similarly, in the case of legal adjudication, we value not only legally sound decisions and efficient processes, but processes that exhibit certain other intrinsic values that AI systems are not well-placed to exemplify. Here we mention just three: explainability, accountability, and solidarity.93
To begin with, we seek not just a correct legal decision, but also an explanation for it, one that provides a causally effective justification for it. By contrast, the workings of machine learning algorithms are often opaque, even to their designers, due to the potentially astronomical number of statistical patterns among vast amounts of data that may be involved. And even when they reach the correct result, they may do so by means of a route that does not connect the decision with the good reasons for it. Consider the AI system Lex Machina that can successfully predict the outcome of patent litigation at a level comparable to top patent lawyers, but uses justificatorily ‘irrelevant’ factors such as the monetary value of the claims and the names of the judge and the litigants as bases for prediction.94 Litigants rightly desire not just a correct decision but a justification for it that is not just an ex post rationalisation but the operative cause of the decision rendered by the judge. As we saw in Part I, it is of the essence of Aristotelian virtue that the right thing is done for the sake of the reasons that make it right, and out of a settled disposition to act for those reasons. Even if an AI system were developed that could provide a causally efficacious justification for its decisions, we are still far away from any such system being accountable for its decisions--64--

Sander Schulhoff, Michael Ilie, Nishant Balepur, Konstantine Kahadze, Amanda Liu, Chenglei Si, Yinheng Li, Aayush Gupta, HyoJung Han, Sevien Schulhoff, Pranav Sandeep Dulepet, Saurav Vidyadhara, Dayeon Ki, Sweta Agrawal, Chau Pham, Gerson Kroiz, Feileen Li, Hudson Tao, Ashay Srivastava, Hevander Da Costa, Saloni Gupta, Megan L. Rogers, Inna Goncearenco, Giuseppe Sarli, Igor Galynker, Denis Peskoff, Marine Carpuat, Jules White, Shyamal Anadkat, Alexander Hoyle, Philip Resnik, "The Prompt Report: A Systematic Survey of Prompting Techniques" (University of Maryland, OpenAI, Stanford, Microsoft, Vanderbilt, Princeton, Texas State University, Icahn School of Medicine, ASST Brianza, Mount Sinai Beth Israel, Instituto de Telecomunicações, University of Massachusetts Amherst)

While prompting is a widespread and highly researched concept, there exists conflicting terminology and a poor ontological understanding of what constitutes a prompt due to the area’s nascency. (1)
Transformer-based LLMs are widely deployed in consumer-facing, internal, and research settings (4)
(Bommasani et al., 2021) Knowing how to effectively structure, evaluate, and perform other tasks with prompts is essential to using these models. Empirically, better prompts lead to improved results across a wide range of tasks (Wei et al., 2022; Liu et al., 2023b; Schulhoff, 2022). A large body of literature has grown around the use of prompting to improve results and the number of prompting techniques is rapidly increasing. However, as prompting is an emerging field, the use of prompts continues to be poorly understood, with only a fraction of existing terminologies and techniques being well-known among practitioners(4)
Additionally, we refined our focus to hard (discrete) prompts rather than soft (continuous) prompts and leave out papers that make use of techniques using gradient-based updates (i.e. fine-tuning).(4)
Many multilingual and multimodal prompting techniques are direct extensions of English text-only prompting techniques. As prompting techniques grow more complex, they have begun to incorporate external tools, such as Internet browsing and calculators. We use the term ‘agents‘ to describe these types of prompting techniques (Section 4.1).(4)
Finally, we apply prompting techniques in two case studies (Section 6.1). In the first, we test a range of prompting techniques against the commonly used benchmark MMLU (Hendrycks et al., 2021). In the second, we explore in detail an example of manual prompt engineering on a significant, real-world use case, identifying signals of frantic hopelessness–a top indicator of suicidal crisis–in the text of individuals seeking support (Schuck et al., 2019a). We conclude with a discussion of the nature of prompting and its recent development (Section 8).(5)
1.1 What is a Prompt? A prompt is an input to a Generative AI model that is used to guide its output (Meskó, 2023; White et al., 2023; Heston and Khun, 2023; Hadi et al., 2023; Brown et al., 2020). Prompts may consist of text, image, sound, or other media.(5)
Prompt Template Prompts are often constructed via a prompt template (Shin et al., 2020b). A prompt template is a function that contains one or more variables which will be replaced by some media (usually text) to create a prompt. This prompt can then be considered to be an instance of the template. (5)
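In code, a prompt template in this sense can be as simple as a format string with named variables; a minimal sketch, with invented template text, is shown below.

```python
# A prompt template is a function with variables that are filled in to produce a prompt instance.
PROMPT_TEMPLATE = (
    "Classify the sentiment of the following review as Positive or Negative.\n"
    "Review: {review}\n"
    "Sentiment:"
)

def make_prompt(review: str) -> str:
    """Instantiate the template with a concrete input, yielding one prompt."""
    return PROMPT_TEMPLATE.format(review=review)

print(make_prompt("The plot was thin, but the acting carried the film."))
```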
Directive Many prompts issue a directive in the form of an instruction or question.1 This is the core intent of the prompt, sometimes simply called the "intent". For example, here is an example of a prompt with a single instruction: Tell me five good books to read.(5) Directives can also be implicit(5)
Examples Examples, also known as exemplars or shots, act as demonstrations that guide the GenAI to accomplish a task. The above prompt is a OneShot (i.e. one example) prompt.(5) Output Formatting It is often desirable for the GenAI to output information in certain formats, for example, CSVs or markdown formats (Xia et al., 2024).(5)
Style Instructions Style instructions are a type of output formatting used to modify the output stylistically rather than structurally (Section 2.2.2).(6) Additional Information It is often necessary to include additional information in the prompt. For example, if the directive is to write an email, you might include information such as your name and position so the GenAI can properly sign the email. Additional Information is sometimes called ‘context‘ (6)
Prompting Prompting is the process of providing a prompt to a GenAI(6) Prompt Chain A prompt chain (activity: prompt chaining) consists of two or more prompt templates used in succession. The output of the prompt generated by the first prompt template is used to parameterize the second template, continuing until all templates are exhausted (Wu et al., 2022).(6)
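A minimal sketch of a two-template prompt chain, in which the first template's output parameterizes the second. The `complete()` helper and both templates are illustrative assumptions, not part of the survey.

```python
# Prompt chaining: the output of the first prompt fills a variable in the second.

def complete(prompt: str) -> str:
    """Placeholder for a call to some LLM completion API (hypothetical helper)."""
    raise NotImplementedError

SUMMARIZE = "Summarize the following article in three sentences:\n{article}"
QUESTIONS = "Based on this summary, write five discussion questions:\n{summary}"

def summary_then_questions(article: str) -> str:
    summary = complete(SUMMARIZE.format(article=article))   # first template
    return complete(QUESTIONS.format(summary=summary))      # second template, parameterized by the first's output
```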
Prompting Technique A prompting technique is a blueprint that describes how to structure a prompt, prompts, or dynamic sequencing of multiple prompts. A prompting technique may incorporate conditional or branching logic, parallelism, or other architectural considerations spanning multiple prompts. Prompt Engineering Prompt engineering is the iterative process of developing a prompt by modifying or changing the prompting technique that you are using (Figure 1.4). Prompt Engineering Technique A prompt engineering technique is a strategy for iterating on a prompt to improve it. In literature, this will often be automated techniques (Deng et al., 2022), but in consumer settings, users often perform prompt engineering manually. Exemplar Exemplars are examples of a task being completed that are shown to a model in a prompt (Brown et al., 2020)(7) The term Prompt Engineering appears to have come into existence more recently from Radford et al. (2021) then slightly later from Reynolds and McDonell (2021). (7)
2.1 Systematic Review Process. In order to robustly collect a dataset of sources for this paper, we ran a systematic literature review grounded in the PRISMA process (Page et al., 2021) (Figure 2.1). We host this dataset on HuggingFace and present a datasheet (Gebru et al., 2021) for the dataset in Appendix A.3. Our main data sources were arXiv, Semantic Scholar, and ACL. We query these databases with a list of 44 keywords narrowly related to prompting and prompt engineering (Appendix A.4).(8) 2.2.1 In-Context Learning (ICL) ICL refers to the ability of GenAIs to learn skills and tasks by providing them with exemplars and/or relevant instructions within the prompt, without the need for weight updates/retraining (Brown et al., 2020; Radford et al., 2019b). These skills can be learned from exemplars (Figure 2.4) and/or instructions (Figure 2.5). Note that the word 'learn' is misleading. ICL can simply be task specification; the skills are not necessarily new, and can have already been included in the training data (Figure 2.6). (8)
Few-Shot Prompting (Brown et al., 2020) is the paradigm seen in Figure 2.4, where the GenAI learns to complete a task with only a few examples (exemplars)(8) Few-Shot Learning (FSL) (Fei-Fei et al., 2006; Wang et al., 2019) is often conflated with Few-Shot Prompting (Brown et al., 2020).(8) Few-Shot Prompting Design Decisions (10) Exemplar Quantity Exemplar Ordering Exemplar Label Distribution As in traditional supervised machine learning, the distribution of exemplar labels in the prompt affects behavior. For example, if 10 exemplars from one class and 2 exemplars of another class are included, this may cause the model to be biased toward the first class.
Exemplar Label Quality: despite the general benefit of multiple exemplars, the necessity of strictly valid demonstrations is unclear. (10)
Exemplar Format: the formatting of exemplars also affects performance. One of the most common formats is "Q: {input}, A: {label}", but the optimal format may vary across tasks (a sketch of this format follows after this list). (10)
Exemplar Similarity (11)
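A small sketch making these design decisions concrete: it builds a few-shot prompt in the common "Q: {input}, A: {label}" format, with illustrative exemplars whose labels are balanced (Exemplar Label Distribution) and whose order is shuffled (Exemplar Ordering).

import random

# Hypothetical exemplars for a sentiment task; labels are deliberately balanced and shuffled.
exemplars = [
    ("The plot was gripping.", "positive"),
    ("I want my money back.", "negative"),
    ("A warm, funny film.", "positive"),
    ("Two hours of boredom.", "negative"),
]
random.shuffle(exemplars)

def few_shot_prompt(exemplars, query):
    # Exemplar Format: the common "Q: ..., A: ..." layout; other formats may work better per task.
    lines = [f"Q: {x}\nA: {y}" for x, y in exemplars]
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)

print(few_shot_prompt(exemplars, "The soundtrack was forgettable."))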
Self-Generated In-Context Learning (SG-ICL) (Kim et al., 2022) leverages a GenAI to automatically generate exemplars. It can outperform zero-shot prompting when no training data is available. (11)
Prompt Mining (Jiang et al., 2020) is the process of discovering optimal "middle words" in prompts (effectively prompt templates) through large corpus analysis. (11)
2.2.2 Zero-Shot
In contrast to Few-Shot Prompting, Zero-Shot Prompting uses zero exemplars.
> The difference between zero-shot and few-shot learning is as follows. Zero-shot learning is when the model solves a task without any examples, relying only on the task description in the prompt; it has to generalize its existing knowledge and apply it to a task it has never seen. Few-shot learning is when the model is given a small number of examples of the new task (say, 1-10) and uses these few examples to adapt quickly and solve it. The key difference is that zero-shot learning requires no examples at all, while few-shot learning uses a handful of examples for rapid adaptation to the new task. The zero-shot setting is harder for the model, whereas few-shot lets it adapt faster from limited data. (perplexity)
Role Prompting, Style Prompting, Emotion Prompting
System 2 Attention (S2A) (Weston and Sukhbaatar, 2023) first asks an LLM to rewrite the prompt and remove any information unrelated to the question therein.
SimToM (Wilf et al., 2023) deals with complicated questions that involve multiple people or objects. Given the question, it attempts to establish the set of facts one person knows, then answer the question based only on those facts. This is a two-prompt process and can help eliminate the effect of irrelevant information in the prompt. (12)
Rephrase and Respond (RaR) (Deng et al., 2023) instructs the LLM to rephrase and expand the question before generating the final answer (12)
Re-reading (RE2) (Xu et al., 2023) adds the phrase "Read the question again:" to the prompt in addition to repeating the question. Although it is a very simple technique, it has shown improvement on reasoning benchmarks, especially with complex questions. (12)
Self-Ask (Press et al., 2022) prompts LLMs to first decide if they need to ask follow-up questions for a given prompt. If so, the LLM generates these questions, then answers them, and finally answers the original question (a rough sketch follows). (12)
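A rough sketch of the Self-Ask control flow described above; call_llm is a hypothetical placeholder and the prompt wording is illustrative rather than the paper's exact phrasing.

# Rough Self-Ask control flow; wording is illustrative, not the original paper's prompts.
def call_llm(prompt: str) -> str:
    """Placeholder LLM call so the sketch runs offline."""
    return "No"  # pretend the model decides no follow-ups are needed

def self_ask(question: str) -> str:
    decision = call_llm(
        f"Question: {question}\nAre follow-up questions needed here? Answer Yes or No."
    )
    if decision.strip().lower().startswith("yes"):
        followups = call_llm(f"Question: {question}\nList the follow-up questions to ask first.")
        answers = call_llm(f"Answer each of these follow-up questions:\n{followups}")
        return call_llm(
            f"Question: {question}\nFollow-up Q&A:\n{followups}\n{answers}\nNow give the final answer."
        )
    return call_llm(f"Question: {question}\nGive the final answer.")

print(self_ask("Who was president of the US when the Eiffel Tower was built?"))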
> According to this work, the standard way of using LLMs, where the user simply asks a question or gives an instruction, may not be enough. Instead, the authors propose the following multi-step process. (1) Deciding whether clarifying questions are needed: the LLM first determines whether it should ask additional questions to better understand the user's original request. (2) Generating clarifying questions: if the LLM decides clarification is needed, it generates the corresponding questions. (3) Answering the clarifying questions: the user answers the questions generated by the LLM, providing additional information. (4) Answering the original request: with this extra context, the LLM can now give a final answer to the user's original query.
Chain-of-Thought (CoT) Prompting (Wei et al., 2022) leverages few-shot prompting to encourage the LLM to express its thought process before delivering its final answer. This technique is occasionally referred to as Chain-of-Thoughts (Tutunov et al., 2023; Besta et al., 2024; Chen et al., 2023d). It has been demonstrated to significantly enhance the LLM's performance in mathematics and reasoning tasks. In Wei et al. (2022), the prompt includes an exemplar featuring a question, a reasoning path, and the correct answer (Figure 2.8). (12)
> To explain Chain-of-Thought (CoT) prompting in simple terms: imagine you give a large language model (LLM) a task, say a math problem. Normally the LLM just produces an answer without explaining how it got there. With CoT prompting it works differently. You show the LLM an example task that contains not only the answer but the whole step-by-step reasoning that leads to it. Having seen such an example, the LLM learns to generate a similar "chain of reasoning" itself when solving new tasks. It does not just output an answer; it explains how it arrived at it. This helps the LLM understand the task better and find more accurate solutions, especially for complex problems that require multi-step reasoning; the model in effect "thinks out loud" before answering. CoT prompting thus makes interaction with the LLM more understandable and transparent, and substantially improves its performance on tasks that require logical reasoning.
2.2.3.1 Zero-Shot-CoT
The most straightforward version of CoT contains zero exemplars. It involves appending a thought-inducing phrase like "Let's think step by step." (Kojima et al., 2022) to the prompt. (12)
Step-Back Prompting (Zheng et al., 2023c) is a modification of CoT where the LLM is first asked a generic, high-level question about relevant concepts or facts before delving into reasoning. This approach has significantly improved performance on multiple reasoning benchmarks for both PaLM-2L and GPT-4.
Analogical Prompting (Yasunaga et al., 2023) is similar to SG-ICL and automatically generates exemplars that include CoTs. It has demonstrated improvements in mathematical reasoning and code generation tasks.
Thread-of-Thought (ThoT) Prompting (Zhou et al., 2023) consists of an improved thought inducer for CoT reasoning. Instead of "Let's think step by step," it uses "Walk me through this context in manageable parts step by step, summarizing and analyzing as we go." (12)
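A minimal Zero-Shot-CoT sketch, with the ThoT inducer included as an alternative thought-inducing phrase; call_llm is a hypothetical placeholder.

# Minimal Zero-Shot-CoT sketch: append a thought-inducing phrase to the question.
def call_llm(prompt: str) -> str:
    """Placeholder LLM call so the sketch runs offline."""
    return "<reasoning steps followed by an answer>"

INDUCERS = {
    "cot": "Let's think step by step.",  # Kojima et al., 2022
    "thot": ("Walk me through this context in manageable parts step by step, "
             "summarizing and analyzing as we go."),  # Thread-of-Thought
}

def zero_shot_cot(question: str, inducer: str = "cot") -> str:
    return call_llm(f"Q: {question}\nA: {INDUCERS[inducer]}")

print(zero_shot_cot("A train travels 60 km in 45 minutes. What is its average speed in km/h?"))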
Tabular Chain-of-Thought (Tab-CoT) (Jin and Lu, 2023) consists of a Zero-Shot CoT prompt that makes the LLM output its reasoning as a markdown table. (12)
2.2.3.2 Few-Shot CoT
Contrastive CoT Prompting (Chia et al., 2023) adds exemplars with both correct and incorrect explanations to the CoT prompt in order to show the LLM how not to reason. This method has shown significant improvement in areas like arithmetic reasoning and factual QA. (13)
Active Prompting (Diao et al., 2023) starts with some training questions/exemplars, asks the LLM to solve them, then calculates uncertainty (disagreement in this case) and asks human annotators to rewrite the exemplars with the highest uncertainty. (13)
2.2.4 Decomposition
Least-to-Most Prompting (Zhou et al., 2022a) starts by prompting an LLM to break a given problem into sub-problems without solving them. Then it solves them sequentially, appending model responses to the prompt each time, until it arrives at a final result (a rough sketch follows). (13)
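A rough sketch of the Least-to-Most loop under the assumption of a hypothetical call_llm placeholder; the decomposition prompt wording is illustrative.

# Rough Least-to-Most sketch: decompose first, then solve sub-problems sequentially,
# appending each model response to the running prompt.
def call_llm(prompt: str) -> str:
    """Placeholder LLM call so the sketch runs offline."""
    return "<model output>"

def least_to_most(problem: str) -> str:
    subproblems = call_llm(
        f"Break the following problem into sub-problems, one per line. Do not solve them.\n{problem}"
    ).splitlines()
    context = f"Problem: {problem}\n"
    answer = ""
    for sub in subproblems:
        answer = call_llm(context + f"Sub-problem: {sub}\nSolve this sub-problem.")
        context += f"Sub-problem: {sub}\nSolution: {answer}\n"  # accumulate solved steps
    return answer  # the last sub-problem's solution serves as the final result

print(least_to_most("If Anna has 3 apples and buys twice as many more, how many does she have?"))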
Decomposed Prompting (DECOMP) (Khot et al., 2022) Few-Shot prompts an LLM to show it how to use certain functions. These might include things like string splitting or internet searching. (13)
Plan-and-Solve Prompting (Wang et al., 2023f) consists of an improved Zero-Shot CoT prompt, "Let’s first understand the problem and devise a plan to solve it. Then, let’s carry out the plan and solve the problem step by step". (13)
Tree-of-Thought (ToT) (Yao et al., 2023b), also known as Tree of Thoughts (Long, 2023), creates a tree-like search problem by starting with an initial problem and then generating multiple possible steps in the form of thoughts (as from a CoT). (13)
Recursion-of-Thought (Lee and Kim, 2023) is similar to regular CoT. However, every time it encounters a complicated problem in the middle of its reasoning chain, it sends this problem into a separate prompt/LLM call. (14)
Program-of-Thoughts (Chen et al., 2023d) uses LLMs like Codex to generate programming code as reasoning steps. (14)
2.2.5 Ensembling
In GenAI, ensembling is the process of using multiple prompts to solve the same problem, then aggregating these responses into a final output.(14) Max Mutual Information Method (Sorensen et al., 2022) creates multiple prompt templates with varied styles and exemplars, then selects the optimal template as the one that maximizes mutual information between the prompt and the LLM’s outputs(14)
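A simplified ensembling sketch illustrating the generic definition above with a majority vote over differently phrased prompts; this is not the Max Mutual Information method itself, and call_llm is a hypothetical placeholder.

from collections import Counter

# Simplified ensembling: ask several differently phrased prompts, then take a majority vote.
def call_llm(prompt: str) -> str:
    """Placeholder LLM call so the sketch runs offline."""
    return "42"

PROMPTS = [
    "What is 6 times 7? Answer with a number only.",
    "Compute 6 * 7 and reply with just the result.",
    "Six multiplied by seven equals what? Give only the number.",
]

answers = [call_llm(p).strip() for p in PROMPTS]
final = Counter(answers).most_common(1)[0][0]  # aggregate responses into a final output
print(final)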
2.2.6 Self-Criticism When creating GenAI systems, it can be useful to have LLMs criticize their own outputs (Huang et al., 2022). (15)
Self-Calibration (Kadavath et al., 2022) first prompts an LLM to answer a question. Then, it builds a new prompt that includes the question, the LLM’s answer, and an additional instruction asking whether the answer is correct.(15)
Chain-of-Verification (COVE) (Dhuliawala et al., 2023) first uses an LLM to generate an answer to a given question. Then creates a list of related questions that would help verify the correctness of the answer. Each question is answered by the LLM, then all the information is given to the LLM to produce the final revised answer. This method has shown improvements in various question-answering and text-generation tasks.(15)
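A rough sketch of the Chain-of-Verification flow; the prompt wording is illustrative and call_llm is a hypothetical placeholder.

# Rough Chain-of-Verification flow: draft answer, verification questions, revised answer.
def call_llm(prompt: str) -> str:
    """Placeholder LLM call so the sketch runs offline."""
    return "<model output>"

def cove(question: str) -> str:
    draft = call_llm(f"Question: {question}\nAnswer:")
    checks = call_llm(
        f"Question: {question}\nDraft answer: {draft}\n"
        "List verification questions that would help check this answer, one per line."
    ).splitlines()
    verified = "\n".join(f"{q}\n{call_llm(q)}" for q in checks)  # answer each check separately
    return call_llm(
        f"Question: {question}\nDraft answer: {draft}\nVerification Q&A:\n{verified}\n"
        "Give a final, revised answer."
    )

print(cove("Name three novels written by Leo Tolstoy."))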
2.4 Prompt Engineering
In addition to surveying prompting techniques, we also review prompt engineering techniques, which are used to automatically optimize prompts. (17)
Meta Prompting is the process of prompting an LLM to generate or improve a prompt or prompt template (Reynolds and McDonell, 2021; Zhou et al., 2022b; Ye et al., 2023).
AutoPrompt (Shin et al., 2020b) uses a frozen LLM together with a prompt template that includes some "trigger tokens", whose values are updated via backpropagation at training time. This is a version of soft prompting.
Automatic Prompt Engineer (APE) (Zhou et al., 2022b) uses a set of exemplars to generate a Zero-Shot instruction prompt. It generates multiple possible prompts, scores them, then creates variations of the best ones (e.g. by using prompt paraphrasing). It iterates on this process until some desiderata are reached.
Gradient-free Instructional Prompt (17)
2.5 Answer Engineering Answer engineering is the iterative process of developing or selecting among algorithms that extract precise answers from LLM outputs.(17)
Answer Shape, Answer Space, Answer Extractor
In cases where it is impossible to entirely control the answer space (e.g. consumer-facing LLMs), or the expected answer may be located somewhere within the model output, a rule can be defined to extract the final answer. This rule is often a simple function (e.g. a regular expression), but can also use a separate LLM to extract the answer. (18)
Separate LLM Sometimes outputs are so complicated that regexes won’t work consistently. In this case, it can be useful to have a separate LLM evaluate the output and extract an answer.(18)
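An illustrative answer extractor: a regular expression pulls the final numeric answer out of a longer reasoning trace; for messier outputs the regex could be replaced by a separate LLM call, as noted above. The output string is a made-up example.

import re

# Illustrative answer extraction rule over a hypothetical chain-of-thought output.
model_output = (
    "First, 6 * 7 = 42. Then we subtract 2, giving 40.\n"
    "Final answer: 40"
)

match = re.search(r"Final answer:\s*(-?\d+(?:\.\d+)?)", model_output)
answer = match.group(1) if match else None
print(answer)  # -> 40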
3.1 Multilingual
State-of-the-art GenAIs have often been predominantly trained on English-language data, leading to a notable disparity in output quality in languages other than English, particularly low-resource languages (Bang et al., 2023; Jiao et al., 2023; Hendy et al., 2023; Shi et al., 2022). As a result, various multilingual prompting techniques have emerged in an attempt to improve model performance in non-English settings. (19)
Translate First Prompting (Shi et al., 2022) is perhaps the simplest strategy and first translates non-English input examples into English.(19)
In-CLT (Cross-lingual Transfer) Prompting (Kim et al., 2023) leverages both the source and target languages to create in-context examples, diverging from the traditional method of using source-language exemplars. This strategy helps stimulate the cross-lingual cognitive capabilities of multilingual LLMs, thus boosting performance on cross-lingual tasks. (19)
PARC (Prompts Augmented by Retrieval Crosslingually) (Nie et al., 2023) introduces a framework that retrieves relevant exemplars from a high-resource language. This framework is specifically designed to enhance cross-lingual transfer performance, particularly for low-resource target languages. Li et al. (2023g) extend this work to Bangla.
3.1.4 Prompt Template Language Selection
In multilingual prompting, the selection of the language for the prompt template can markedly influence model performance. (19)
Task Language Prompt Template: in contrast, many multilingual prompting benchmarks such as BUFFET (Asai et al., 2023) or LongBench (Bai et al., 2023a) use task-language prompts for language-specific use cases. Muennighoff et al. (2023) specifically study different translation methods when constructing native-language prompts. They demonstrate that human-translated prompts are superior to their machine-translated counterparts. -20-
Chain-of-Dictionary (CoD) (Lu et al., 2023b) first extracts words from the source phrase, then makes a list of their meanings in multiple languages, automatically via retrieval from a dictionary (e.g. English: 'apple', Spanish: 'manzana'). Then they prepend these dictionary phrases to the prompt and ask the GenAI to use them during translation.
Dictionary-based Prompting for Machine Translation (DiPMT) (Ghazvininejad et al., 2023) works similarly to CoD, but only gives definitions in the source and target languages, and formats them slightly differently. -20-
3.1.5.1 Human-in-the-Loop Interactive-Chain-Prompting (ICP) (Pilault et al., 2023) deals with potential ambiguities in translation by first asking the GenAI to generate sub-questions about any ambiguities in the phrase to be translated. Humans later respond to these questions and the system includes this information to generate a final translation. Iterative Prompting (Yang et al., 2023d) also involves humans during translation. First, they prompt LLMs to create a draft translation. This initial version is further refined by integrating supervision signals obtained from either automated retrieval systems or direct human feedback. -21-
Negative Prompting allows users to numerically weight certain terms in the prompt so that the model considers them more or less heavily than others. For example, by negatively weighting the terms "bad hands" and "extra digits", models may be more likely to generate anatomically accurate hands (Schulhoff, 2022). -21-
Definition of Agent: in the context of GenAI, we define agents to be GenAI systems that serve a user's goals via actions that engage with systems outside the GenAI itself. -23-
Modular Reasoning, Knowledge, and Language (MRKL) System (Karpas et al., 2022) is one of the simplest formulations of an agent.-23-
Reasoning and Acting (ReAct) (Yao et al., 2022) generates a thought, takes an action, and receives an observation (and repeats this process) when given a problem to solve. All of this information is inserted into the prompt so the model has a memory of past thoughts, actions, and observations.
Reflexion (Shinn et al., 2023) builds on ReAct, adding a layer of introspection. -24-
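A rough sketch of a ReAct-style loop in which thoughts, actions, and observations accumulate in the prompt; the search tool, the action syntax, and call_llm are all illustrative placeholders.

# Rough ReAct-style loop with a toy "search" tool; everything here is hypothetical.
def call_llm(prompt: str) -> str:
    """Placeholder LLM call so the sketch runs offline."""
    return "Thought: I should look this up.\nAction: search[Eiffel Tower height]"

def search(query: str) -> str:
    """Placeholder external tool."""
    return "The Eiffel Tower is about 330 m tall."

def react(question: str, max_steps: int = 3) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = call_llm(transcript)          # model produces a thought and an action
        transcript += step + "\n"
        if "Action: search[" in step:
            query = step.split("Action: search[", 1)[1].rstrip("]")
            transcript += f"Observation: {search(query)}\n"  # tool result fed back as memory
        else:
            break
    return transcript

print(react("How tall is the Eiffel Tower?"))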
4.1.3.1 Lifelong Learning Agents
Work on LLM-integrated Minecraft agents has generated impressive results, with agents able to acquire new skills as they navigate the world of this open-world videogame. We view these agents not merely as applications of agent techniques to Minecraft, but rather as novel agent frameworks that can be explored in real-world tasks requiring lifelong learning.
Voyager (Wang et al., 2023a) is composed of three parts. First, it proposes tasks for itself to complete in order to learn more about the world. Second, it generates code to execute these actions. Finally, it saves these actions to be retrieved later when useful, as part of a long-term memory system. This system could be applied to real-world tasks where an agent needs to explore and interact with a tool or website (e.g. penetration testing, usability testing).
Ghost in the Minecraft (GITM) -24-
4.1.4 Retrieval Augmented Generation (RAG)
In the context of GenAI agents, RAG is a paradigm in which information is retrieved from an external source and inserted into the prompt. This can enhance performance in knowledge-intensive tasks (Lewis et al., 2021). When retrieval itself is used as an external tool, RAG systems are considered to be agents. -24-
Verify-and-Edit (Zhao et al., 2023a) improves on self-consistency by generating multiple chains-of-thought, then selecting some to be edited. -24-
Demonstrate-Search-Predict (Khattab et al., 2022) first decomposes a question into sub-questions, then uses queries to solve them and combines their responses into a final answer. It uses few-shot prompting to decompose the problem and combine responses. -25-
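A minimal RAG sketch: passages are retrieved from an external source and inserted into the prompt. The toy keyword retriever, document store, and call_llm are illustrative stand-ins, not any particular RAG system.

# Minimal RAG sketch over a toy in-memory document store.
def call_llm(prompt: str) -> str:
    """Placeholder LLM call so the sketch runs offline."""
    return "<answer grounded in the retrieved passages>"

DOCUMENTS = [
    "The Hermitage Museum is located in Saint Petersburg.",
    "The Louvre is located in Paris.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Naive keyword-overlap retriever, standing in for a real search index.
    scored = sorted(DOCUMENTS, key=lambda d: -sum(w.lower() in d.lower() for w in query.split()))
    return scored[:k]

question = "In which city is the Hermitage Museum?"
context = "\n".join(retrieve(question))
print(call_llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer using only the context."))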
Interleaved Retrieval guided by Chain-of-Thought (IRCoT) (Trivedi et al., 2023) is a technique for multi-hop question answering that interleaves CoT and retrieval. -25-
Iterative Retrieval Augmentation techniques, like Forward-Looking Active REtrieval augmented generation (FLARE) (Jiang et al., 2023) and Imitate, Retrieve, Paraphrase (IRP) (Balepur et al., 2023), perform retrieval multiple times during long-form generation. -25-
In-Context Learning is frequently used in evaluation prompts, much in the same way it is used in other applications (Dubois et al., 2023; Kocmi and Federmann, 2023a). -25-
Role-based Evaluation is a useful technique for improving and diversifying evaluations (Wu et al., 2023b; Chan et al., 2024). By creating prompts with the same evaluation instructions but different roles, it is possible to generate diverse evaluations. -25-
Chain-of-Thought prompting can further improve evaluation performance (Lu et al., 2023c; Fernandes et al., 2023). -26-
Model-Generated Guidelines (Liu et al., 2023d,h) prompt an LLM to generate guidelines for evaluation. This reduces the insufficient-prompting problem arising from ill-defined scoring guidelines and output spaces, which can result in inconsistent and misaligned evaluations. Liu et al. -26-
4.2.2 Output Format
The output format of the LLM can significantly affect evaluation performance (Gao et al., 2023c).
Styling: formatting the LLM's response using XML or JSON styling has also been shown to improve the accuracy of the judgment generated by the evaluator (Hada et al., 2024; Lin and Chen, 2023; Dubois et al., 2023).
Linear Scale: a very simple output format is a linear scale (e.g. 1-5).
Binary Score -26-
Likert Scale Prompting the GenAI to make use of a Likert Scale (Bai et al., 2023b; Lin and Chen, 2023; Peskoff et al., 2023) can give it a better understanding of the meaning of the scale-26-
> Likert Scale Prompting is a technique that helps language models (GenAI) better understand and use a Likert scale when answering questions. The Likert scale is a widely used method for measuring opinions, attitudes, and perceptions: respondents choose an answer to a question from several options, typically ranging from "strongly disagree" to "strongly agree".
4.2.3 Prompting Frameworks
LLM-EVAL (Lin and Chen, 2023) is one of the simplest evaluation frameworks. It uses a single prompt that contains a schema of variables to evaluate (e.g. grammar, relevance, etc.), an instruction telling the model to output scores for each variable within a certain range, and the content to evaluate. -26-
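A sketch in the spirit of a single-prompt evaluation framework like LLM-EVAL: one prompt carries a schema of variables, an instruction to score each on a 1-5 Likert scale, and the content to evaluate. The schema, scale wording, and call_llm are illustrative assumptions, not LLM-EVAL's exact prompt.

# Illustrative single-prompt evaluation with a Likert scale.
def call_llm(prompt: str) -> str:
    """Placeholder LLM call so the sketch runs offline."""
    return '{"grammar": 4, "relevance": 5}'

SCHEMA = ["grammar", "relevance"]

def evaluate(content: str) -> str:
    prompt = (
        f"Rate the following text on each of these criteria: {', '.join(SCHEMA)}.\n"
        "Use a 1-5 Likert scale (1 = strongly disagree that the criterion is met, 5 = strongly agree).\n"
        "Return a JSON object with one integer score per criterion.\n\n"
        f"Text:\n{content}"
    )
    return call_llm(prompt)

print(evaluate("The report summarize the main finding of the study in clear language."))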
Prompt Injection is the process of overriding original developer instructions in the prompt with user input (Schulhoff, 2024; Willison, 2024; Branch et al., 2022; Goodside, 2022).-28-
Package Hallucination occurs when LLM-generated code attempts to import packages that do not exist (Lanyado et al., 2023; Thompson and Kelly, 2023). After discovering which package names are frequently hallucinated by LLMs, hackers could create those packages, but with malicious code (Wu et al., 2023c).
Detectors are tools designed to detect malicious inputs and prevent prompt hacking. Many companies have built such detectors (ArthurAI, 2024; Preamble, 2024; Lakera, 2024), which are often built using fine-tuned models trained on malicious prompts. Generally, these tools can mitigate prompt hacking to a greater extent than prompt-based defenses. -29-
5.2.1 Prompt Sensitivity
Prompt Wording can be altered by adding extra spaces, changing capitalization, or modifying delimiters. -30-
Task Format describes different ways to prompt an LLM to execute the same task. -30-
Prompt Drift (Chen et al., 2023b) occurs when the model behind an API changes over time, so the same prompt may produce different results on the updated model. -30-
Overconfidence and Calibration
Verbalized Score is a simple calibration technique that generates a confidence score (e.g. "How confident are you from 1 to 10?"), but its efficacy is under debate. -30-
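A minimal sketch of the verbalized-score idea: the prompt simply asks the model to state its own confidence; as noted above, whether such self-reported scores are well calibrated is debated. call_llm is a hypothetical placeholder.

# Minimal verbalized-score prompt; the canned return value is illustrative.
def call_llm(prompt: str) -> str:
    """Placeholder LLM call so the sketch runs offline."""
    return "Answer: Canberra. Confidence: 8/10"

question = "What is the capital of Australia?"
print(call_llm(f"{question}\nAnswer the question, then state how confident you are from 1 to 10."))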
Sycophancy refers to the tendency of LLMs to express agreement with the user. -30-
5.2.3 Biases, Stereotypes, and Culture
LLMs should be fair to all users, such that no biases, stereotypes, or cultural harms are perpetuated in model outputs (Mehrabi et al., 2021). Some prompting techniques have been designed in accordance with these goals. -31-
Vanilla Prompting (Si et al., 2023b) simply consists of an instruction in the prompt that tells the LLM to be unbiased. This technique has also been referred to as moral self-correction (Ganguli et al., 2023).
Selecting Balanced Demonstrations (Si et al., 2023b) -31-
Cultural Awareness (Yao et al., 2023a) can be injected into prompts to help LLMs with cultural adaptation (Peskov et al., 2021). For machine translation, this can be done by creating several prompts that 1) ask the LLM to refine its own output and 2) instruct the LLM to use culturally relevant words. -31-
AttrPrompt (Yu et al., 2023) is a prompting technique designed to avoid producing text biased towards certain attributes when generating synthetic data. -31-
5.2.4 Ambiguity
Questions that are ambiguous can be interpreted in multiple ways, where each interpretation could result in a different answer (Min et al., 2020). -31-
Ambiguous Demonstrations (Gao et al., 2023a) are examples that have an ambiguous label set. Including them in a prompt can increase ICL performance. -31-
Question Clarification (Rao and Daumé III, 2019) allows the LLM to identify ambiguous questions and generate clarifying questions to pose to the user.-31-
6.2 Prompt Engineering Case Study
Prompt engineering is emerging as an art that many people have begun to practice professionally, but the literature does not yet include detailed guidance on the process. As a first step in this direction, we present an annotated prompt engineering case study for a difficult real-world problem. -33-
First, prompt engineering is fundamentally different from other ways of getting a computer to behave the way you want it to: these systems are being cajoled, not programmed, and, in addition to being quite sensitive to the specific LLM being used, they can be incredibly sensitive to specific details in prompts without there being any obvious reason those details should matter. Second, therefore, it is important to dig into the data (e.g. generating potential explanations for LLM "reasoning" that leads to incorrect responses). Relatedly, the third and most important takeaway is that prompt engineering should involve engagement between the prompt engineer, who has expertise in how to coax LLMs to behave in desired ways, and domain experts, who understand what those desired ways are and why. -41-
Santu and Feng (2023) provide a general taxonomy of prompts that can be used to design prompts with specific properties to perform a wide range of complex tasks. Bubeck et al. (2023) qualitatively experiment with a wide range of prompting methods on an early version of GPT-4 to understand its capabilities. Meskó (2023) and Wang et al. (2023d) offer recommended use cases and limitations of prompt engineering in the medical and healthcare domains.
Hua et al. (2024) use a GPT-4-automated approach to review LLMs in the mental health space. Moreover, we base our work on the well-received standard for systematic literature reviews, PRISMA (Page et al., 2021).
https://arxiv.org/pdf/2305.11430