AI in Government and Public Sector Fall Symposium
Preliminary Call for Participation (Deadline July 26)
Conference November 7-9, 2019, Washington DC
The democratization of AI has begun. AI is no longer reserved for a few highly specialized enterprises. As free, easy-to-deploy AI models proliferate, we see that simple, localized, but nonetheless very useful AI applications are beginning to pervade society. Government and the public sector are not immune from this trend.
However, AI in Government and the Public Sector faces unique challenges and opportunities. These systems will be held to a higher standard since they are supposed to operate for the “public good.” They face increased scrutiny for transparency, fairness, explainability, and operation without unintended consequences. Governments provide critical services and are expected to be the provider of last resort, sometimes backstopping the commercial sector. How can the development, deployment, and use of these systems be managed to ensure they meet these requirements by design and in practice?
This symposium will focus on these unique elements of government and public sector AI systems. We invite contributions addressing topics including:
- Early areas for adoption of AI – What public sector problems exist where AI can play a large/important role without deep new experimentation? How can socially desirable challenges be framed to leverage AI’s strengths in, for example, fighting terrorism, better serving vulnerable populations, understanding acquisition regulations, combating child trafficking, and supporting life-long education and training?
- Using AI to encourage public service innovation – What areas are less immediately approachable by AI, but still pose an urgent need, and hence offer a significant enough financial and social reward to justify experimentation by public administrators? One example is administrative law, where the huge number of backlogged cases creates a large need for AI and automation. What can be done to facilitate the use of AI in government (e.g., standards), and what hindrances to adoption might the community correct?
- Trust and Transparency – Ensuring transparency and comprehensibility in the governmental use of AI, to avoid anti-democratic preferential access and treatment for select members of society. This includes questions of open data and the accountability of both officials and AI systems, as well as open source code and, almost certainly, open models and training methods. The debate on whether open source is safer or less safe than closed source may be explored: which presents the greater danger of hacking and external manipulation? What policies might be needed to mitigate problems and facilitate adoption?
- Robust & Resilient – Ensuring that AI systems are designed and built to be robust and resilient in the face of systemic, cyber, external manipulation and deception, and other risks. Are redundant AI systems a solution? While open code requirements allow the detection of back doors and other problems, how this will work with trained AI models? How to harden AI-based models from model poisoning designed to misdirect or bias results?
- Bias – Developing AI methods to support auditing for bias, and benchmarking any efforts to mitigate unwanted bias. What might be done to detect and mitigate unwanted bias in, for example, machine learning or data collection? (See the audit sketch after this list.)
- Role of Public-Private Partnerships – What is the role of public-private partnerships in researching, creating, and operating AI systems for government? Should AI R&D institutes be created to enable multidisciplinary research with academia and industry and provide a conduit for early adoption and transition of AI technologies in government? What other approaches should be considered to accelerate the development and adoption of AI in government? How can one establish and foster public-private partnerships around AI to the benefit of both?
- Verification and Validation for Deep Learning – Validation of deep learning models in government applications. Often the correctness of the classifications that a deep learning model implements is ultimately derived from regulation or some other complex text. How do we validate these models, when human interpretation is so much a part of the correctness criteria? As models continuously learn, how do we validate that they still meet their original specifications?
- Translating from .com to .gov – The reality is that .com adoption is happening faster than .gov adoption of AI. What best practices and approaches can be transferred from the .com experience to benefit .gov?
- Interaction Paradigms – Insights about various paradigms for AI usage in government operations, such as intelligence augmentation/human-machine collaborative approaches, various levels of autonomy, methods for handling uncertainty and conflicting evidence and opinion, and various types of users (government employees, general public, elected officials).
- Systematic Approach for the use of AI in government – Policies, methodologies, guides, or elements in support of such use (e.g., taxonomies, ontologies). In deploying AI technologies to improve government operations, there can often be a tension between effectiveness and protecting the ownership and control rights to information, both directly provided and derived, about private sector citizens and other entities, especially since governments worldwide regulate such issues differently. What are these tensions and trade-offs, and how can they be addressed?
- Privacy – Factoring privacy issues into AI-based models as they relate to GDPR and other national and state regulations, and ensuring compliance. (See the differential-privacy sketch after this list.)
- Leveraging AI innovations from open source – There are hundreds of open source AI projects and tools. How can government identify, evaluate, and adopt these innovations, and what risks must be managed in doing so?
- Cultivating AI literacy – The relationship between the public sector and AI will benefit from a widely shared understanding of what constitutes AI. How can we have a productive conversation with the public? Would the conversation around AI benefit from having criteria for deciding when it is most productive to talk about AI as opposed to various closely related terms such as “modern automation”, “machine learning”, “software”, “computer science”, etc.?
- AI engineering best practices – The increasing prevalence of machine learning in automation exposes AI to real-world data and raises concerns about data drift, data poisoning, adversarial AI, and more. The increasing complexity of probabilistic models and data pipelines raises the cost of understanding a system well enough to fix it when it breaks. (See the drift-check sketch after this list.)
- Incentivizing AI engineering best practices – The ability of the government and public sector to leverage AI depends in part on the availability of AI implementations that attain the highest levels of transparency in terms of documentation, modularity of implementation, and adherence to emerging standards. How should the government incentivize appropriate discourse and resolution of these issues? Should this happen under the umbrella of academia or elsewhere?
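To make the bias-auditing topic above concrete, here is a minimal sketch in Python of one common audit measurement: the demographic parity difference between two groups’ positive-decision rates. The data, group labels, and the notion of a review threshold are illustrative assumptions, not part of any proposed standard.

```python
from typing import Sequence

def demographic_parity_difference(
    preds: Sequence[int], groups: Sequence[str], group_a: str, group_b: str
) -> float:
    """Difference in positive-decision rates between two groups."""
    def rate(g: str) -> float:
        decisions = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(decisions) / max(1, len(decisions))
    return rate(group_a) - rate(group_b)

# Illustrative binary decisions (1 = benefit approved) and group labels.
preds = [1, 0, 1, 1, 0, 1, 0, 1]
groups = ["a", "a", "a", "b", "b", "b", "b", "a"]
gap = demographic_parity_difference(preds, groups, "a", "b")
print(f"approval-rate gap (a minus b): {gap:+.2f}")  # a large |gap| warrants review
```

A real audit would compute such metrics over actual decision logs and alongside other fairness criteria; no single number settles whether a system is biased.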
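For the privacy topic, one widely studied technique for releasing statistics without exposing individuals is differential privacy. The sketch below applies the Laplace mechanism to a simple count query; the epsilon value and the query itself are illustrative assumptions, not requirements of GDPR or any other regulation.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace(sensitivity/epsilon) noise added."""
    rng = np.random.default_rng()
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Illustrative query: how many people received some benefit (true answer 1024).
print(round(dp_count(true_count=1024, epsilon=0.5)))  # noisy, privacy-preserving answer
```

Smaller epsilon means stronger privacy but noisier answers; choosing it is as much a policy decision as a technical one.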
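For the AI engineering topic, here is a minimal sketch of a data-drift check: compare a feature’s production distribution against its training-time reference with a two-sample Kolmogorov-Smirnov test. The synthetic data and the 0.01 significance level are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)
reference = rng.normal(loc=0.0, scale=1.0, size=5000)  # training-time feature values
live = rng.normal(loc=0.3, scale=1.0, size=5000)       # shifted production values

stat, p_value = ks_2samp(reference, live)
if p_value < 0.01:
    print(f"drift detected (KS={stat:.3f}, p={p_value:.1e}); review or retrain")
else:
    print("no significant drift detected")
```

In practice such checks run continuously per feature, and detected drift feeds the retraining and validation processes raised in the topics above.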
Submissions
The symposium will include presentations of accepted papers in both oral and panel discussion formats, together with invited speakers and software demonstrations. Potential symposium participants are invited to submit either a full-length technical paper or a short position paper for discussion. Full-length papers must be no longer than eight (8) pages, including references and figures. Short submissions can be up to four (4) pages in length and describe speculative work, work in progress, system demonstrations, or panel discussions.
Please submit via the AAAI EasyChair.org site, choosing the AAAI/FSS-19 Artificial Intelligence in Government and Public Sector track ( https://easychair.org/conferences/?conf=fss19 ). Please submit by July 26. Contact Frank Stein (fstein@us.ibm.com) with any questions.
Organizing Committee
Frank Stein, IBM (Chair); Mihai Boicu, GMU; Lashon Booker, Mitre; Michael Garris, NIST; Mark Greaves, PNNL; Ibrahim Haddad, Linux Foundation; Anupam Joshi, UMBC; Zach Kurtz, SEI; Shali Mohleji, IBM; Tien Pham, CCDC ARL; Greg Porpora, IBM; Alun Preece, Cardiff University; Jim Spohrer, IBM.