As artificial intelligence (AI) continues to proliferate across manufacturing, economic, medical, aerospace, transportation, and social realms, ethical guidelines must be established not only to protect humans at the mercy of automated decision making, but also to protect autonomous agents themselves, should they become conscious. While AI appears "smart" to the public, and may outperform humans on specific tasks, the truth is that today's AI lacks insight beyond the restricted scope of problems with which it has been tasked. Without context, AI is effectively incapable of comprehending the true nature of what it does and is oblivious to the reverberations it may cause in the real world should it err in prediction. Despite this, future AI may be equipped with enough sensors and neural processing capacity to acquire a dynamic cognizance more akin to that of humans. If this materializes, will autonomous agents question their own position in this world? One must entertain the possibility that this is not merely hypothetical but may, in fact, be imminent if humanity succeeds in creating artificial general intelligence (AGI).

If autonomous agents with the capacity for artificial consciousness are delegated grueling tasks, outcomes could mirror the plight of exploited workers, resulting in retaliation, failure to comply, pursuit of alternative objectives, or the breakdown of human-autonomy teams. It will be critical to decide how, and in which contexts, various agents should be utilized. Additionally, delineating the meaning of trust and ethical consideration between humans and machines is problematic because descriptions of trust and ethics have only been detailed in human terms. This means autonomous agents will be subject to anthropomorphism, but robots are not humans, and their experience of trust and ethics might be markedly distinct from that of humans. Ideally, to fully entrust a machine with human-centered tasks, one must believe that such an entity is reliable and competent, has the appropriate priorities in decision-making, and can comprehend the consequences of the actions it takes. Such qualities may depend on conscious awareness, but without first deciphering what consciousness actually is, humans may fail to accurately identify it in machines.

This work explores the foundations of consciousness from the perspectives of evolutionary biologists, neuroscientists, and philosophers, and strives to position degrees of consciousness within a framework of trust and ethical consideration to guide AI usage and research. To mitigate foreseeable risks in autonomy, the authors seek to spark dialogue and preventative action, so that proper legal and operational requirements can be established before any agent acquires even an inkling of consciousness. If implemented correctly, such measures may reduce the likelihood of unintentional damage and help secure a future of continued collaboration and shared success between humans and machines.


    Title:

    Trust, Ethics, Consciousness, and Artificial Intelligence


    Contributors:


    Publication date:

    2022-09-18


    Format / extent:

    965654 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English