Articles of War – Lieber Institute West Point
Year Ahead – Emerging Technologies and the Collection of Battlefield Evidence

by Chris Jenks, Eric Jensen | Jan 13, 2023

In a recent series of posts on responsible artificial intelligence (AI), various authors (including Chris Jenks) have discussed the role of “responsibility” in the context of using of...
Year Ahead – The Hurdles to International Regulation of AI Tools

by Ashley Deeks | Jan 5, 2023

In 2023, non-governmental organizations such as Human Rights Watch and Stop Killer Robots will continue their calls for a new international legal framework to regulate autonomous weapons systems....
Responsible AI Symposium – Responsible AI and Legal Review of Weapons

by Michael W. Meier | Dec 27, 2022

Editor’s note: The following post highlights a subject addressed at an expert workshop conducted by the Geneva Centre for Security Policy focusing on Responsible AI. For a general introduction to...
Responsible AI Symposium – The AI Ethics Principle of Responsibility and LOAC

by Chris Jenks | Dec 21, 2022

Editor’s note: The following post highlights a subject addressed at an expert workshop conducted by the Geneva Centre for Security Policy focusing on Responsible AI. For a general...
Responsible AI Symposium – Legal Implications of Bias Mitigation

by Juliette François-Blouin | Dec 7, 2022

Editor’s note: The following post highlights a subject addressed at an expert workshop conducted by the Geneva Centre for Security Policy focusing on Responsible AI. For a general introduction to this...
Responsible AI Symposium – Implications of Emergent Behavior for Ethical AI Principles for Defense

by Daniel Trusilo | Nov 30, 2022

Editor’s note: The following post highlights a subject addressed at an expert workshop conducted by the Geneva Centre for Security Policy focusing on Responsible AI. For...

MADN-LAW
United States Military Academy 

646 Swift Road
West Point, NY 10996

+1-845-938-2572

Articles of War Disclaimer

The views expressed are those of the authors, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense. 

Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor does it endorse or advocate the material that is published.
