AI, Law, and Regulation (ALR) Section
The International Neural Networks Society (INNS) is a pioneering, cross-disciplinary organization dedicated to advancing artificial intelligence and machine learning. Recognizing global interest in both the benefits of and concerns about the market proliferation of these technologies, governments worldwide are developing and introducing AI policy and legislative frameworks. The scale of these governance efforts, and the extent to which regulation will influence AI systems design, calls for close collaboration between the legal and technical communities in navigating AI's regulatory landscape. The Section on AI, Law, and Regulation seeks to foster a community of technology law scholars and practitioners, AI developers, and AI policymakers, bridging the gap between disciplines and establishing strong collaborative partnerships for an effective and pragmatic approach to AI governance.
Membership
Membership is open to all INNS members interested in the nexus of AI, law, and regulation. The Section will be of particular interest to model developers, especially those who wish to deploy their models in the market. Close collaborators on the Section's activities may be invited to join the AI, Law, and Regulation ('ALR') Pioneers Network.
Governance
The AI, Law, and Regulation (‘ALR’) INNS Section is governed by:
- Co-Chairs: Asim Roy (Professor of Information Systems, Arizona State University) and Nicola Fabiano (Lawyer; Adjunct Professor, Ostrava University; Member of the United Nations University Artificial Intelligence Network - UNU AI Network);
- Section Coordinator: Amanda Horzyk (AI Law Researcher, University of Edinburgh), responsible for the strategy, direction, and implementation of the Section;
- AI, Law, and Regulation Pioneers Group: Nicola Fabiano (Lawyer; Adjunct Professor, Ostrava University), Maja Nišević (Postdoctoral Researcher, Centre for IT & IP Law, KU Leuven), Alexander Kriebitz (Postdoctoral Researcher, Technical University of Munich), Martin Ebers (President of the Robotics & AI Law Society; IT Law Professor, University of Tartu), Vagelis Papakonstantinou (Attorney; Law Professor, Vrije Universiteit Brussel), Burkhard Schafer (Professor of Computational Legal Theory, University of Edinburgh), Lachlan Urquhart (Senior Lecturer in Technology Law and HCI, University of Edinburgh), Nicole Inverardi (Data and AI Ethicist), Mikołaj Barczentewicz (Associate Professor, University of Surrey), and Mikołaj Firlej (investor, philosopher, and academic lawyer working on responsible AI).
Activities
The activities of the AI, Law, and Regulation Section will involve:
- Annual AI, Law, and Regulation Special Sessions and Workshops: Collaborating on ALR special sessions at conferences, facilitating knowledge sharing, networking, and the dissemination of research findings.
- AI Policy Panels and Special Section Discussions: Hosting open, constructive conversations with prominent speakers involved in AI policy and regulation.
- Awareness and Dissemination via INNS Newsletter Entries: Regular updates on the most pertinent developments in AI regulation, helping the INNS community navigate and anticipate the AI policy, law, and enforcement landscape.
- Policy Engagement Opportunities for AI Experts: Circulating calls for evidence to inform future law and policy.
- Training Courses and Regulatory AI Webinars: An Applied AI Ethics and Regulatory AI Literacy series for AI developers and researchers.
- Recognition: Pioneering an Ethical AI Award scheme, developing ethical AI criteria, and launching the Ethical AI challenge.
Objectives
The primary objectives of the ALR Section are to:
- Build Regulatory AI Literacy: Assist AI developers and researchers in navigating the complex regulatory AI landscape and understanding how it can impact their work.
- Foster Community Discussions: Open a platform for discussions through multi-stakeholder and interdisciplinary approaches to hear from researchers, legal academics, and industry representatives on the current and imminent AI regulatory landscape and its impact on how we design, develop, and deploy neural network models.
- Inform Law and Policy: Facilitate constructive dialogue between policymakers, lawmakers, AI developers, and industry.
- Facilitate Interdisciplinary Collaboration: Encourage and facilitate connections and projects between technology law scholars, lawyers, policy actors, AI developers, and researchers in the development and implementation of AI regulations.
- Disseminate Knowledge and Best Practices: Identify, promote, and share best practices for meeting legal obligations and policy and legal aspirations.
Scope
The ALR Section will focus on bridging the interdisciplinary gap between socio-legal and AI research, connecting critical perspectives on governance frameworks for AI: regulatory considerations and challenges, industry standards, best practices, and the ways AI undermines current legal assumptions and how these challenges will be addressed. Topics include, but are not limited to:
System Input Aspects of intellectual property (copyright protection, etc.), data protection and privacy, and transparency requirements for generative and predictive models:
- Exploiting Big Data through data mining, harvesting, and web scraping for commercial versus non-commercial purposes.
- Compliance in profiling, aggregation, big data management, anonymization, and pseudonymization practices.
- Transparency of input for generative models (Large language models, etc.) and automatic data harvesting.
- Safeguarding Intellectual Property: issues of authorship, inventorship, ownership, infringement and administration.
- Information security and privacy protection concerns: purpose limitations and data minimization.
- Ascertaining consent and respecting the human rights of individuals.
Model Architecture Aspects of explainability, transparency, and interpretability of decision-making processes, as well as model safety (cybersecurity) and accountability (liability, negligence, etc.) practices and obligations:
- Explainability requirements for high-risk applications
- Decision-making trustworthiness and transparency obligations
- Data Protection by design and default
- Privacy Engineering
- Prompt Engineering
- Development of responsible HCI-inspired architectures
- Risk management and systematic review: interpreting and assessing the risk of AI systems and automated data processing
- Prohibited AI practices
- Regulation of Algorithmic Bias and Discrimination
- Regulation of systems with problematic assumptions (emotion recognition, real-time recognition, etc.)
System Output Aspects of regulating the disruption arising from AI applications across various industries, including finance, healthcare, education, media, and advertising:
- Generative AI content disclosure, labelling, and communication, and online safety
- Accountability measures for output (illegal content, deepfakes, hallucinations, etc.)
- Disclosure of content manipulation
- Governance and implementation of regulatory requirements (e.g., watermarking, hashing, data encryption for AI-Generated content)
- Governance of decision-making and algorithmic data processing in recruitment, law enforcement, critical infrastructure, credit scoring, etc.
How to Join
The membership application form is available at the link below: