Automatic Encoding From Natural Language to First-Order Horn Clauses
Master thesis
Permanent link: https://hdl.handle.net/11250/3139356
Publication date: 2024-06-03
Abstract
A central topic in machine ethics, and in other fields where moral reasoning is needed, is incorporating large quantities of (moral) rules expressed in natural language into a formal specification that can be used for reasoning. The problem lies in the difficulty of automatically encoding natural language into logic. The task currently requires knowledge engineers to encode moral rules manually, a solution that is expensive and offers little efficiency or scalability. The introduction of large language models creates an opportunity to explore the advantages they may provide for this task. We investigate how to build a system that automatically encodes natural-language norms into logic, using GPT-4 as the main encoding component. We take norms from the Commonsense Norm Bank, a collection from the literature containing both common-sense and counter-intuitive norms, and manually analyse how well GPT-4 encodes these norms into first-order logic (FOL). Additionally, we convert all FOL-encoded formulas to Horn clauses and analyse which types of norms cannot be expressed in Horn form. We find that our encoding system accurately encodes 51% of the norms into FOL in terms of syntactic properties and semantic faithfulness. The mistakes GPT-4 makes for the remaining 49% of the encodings are numerous and vary in predictability and severity. We show how a sentiment analysis model can serve as a mitigation tool for one of these distinct mistakes.
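The abstract notes that not every FOL-encoded norm survives translation to Horn form. The defining check is simple: a clause (a disjunction of literals) is Horn if and only if it contains at most one positive literal. The sketch below illustrates that check in Python; it is not from the thesis, and the predicate names are purely illustrative.

```python
# Minimal sketch (not the thesis implementation): a clause in clausal form
# is Horn iff it has at most one positive literal. Literals are modeled as
# (predicate, is_positive) pairs; the predicate strings are illustrative.

def is_horn(clause):
    """Return True if the clause has at most one positive literal."""
    return sum(1 for _, positive in clause if positive) <= 1

# A rule like lie(x) -> wrong(x) has clausal form ¬lie(x) ∨ wrong(x):
# one positive literal, so it is Horn and expressible as a rule.
rule = [("lie(x)", False), ("wrong(x)", True)]

# A disjunctive norm such as good(x) ∨ permissible(x) has two positive
# literals, so it cannot be expressed as a single Horn clause.
disjunction = [("good(x)", True), ("permissible(x)", True)]
```

Norms whose clausal form contains such multi-positive disjunctions are examples of what an analysis like the thesis's would flag as inexpressible in Horn.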