AI Workforce Development
Created May 2022
A well-developed, knowledgeable AI workforce accelerates any organization’s ability to gain the leap-ahead advantages AI promises. At the SEI, we bring the latest academic advances from Carnegie Mellon University to the real-world challenges faced by defense and national security organizations, advancing the professional discipline of AI engineering. Through tailored interactive workshops, we share our expertise with AI teams, practitioners, and leaders.
Doing AI as Well as AI Can Be Done...At Scale
Creating, deploying, and maintaining AI solutions requires unique skillsets and mindsets, and organizations including the U.S. Department of Defense, the National Security Commission on AI, and the Georgetown Center for Security and Emerging Technology have identified the shortage of AI talent as a challenge to creating reliable AI solutions.
As the SEI leads the development of a community to accelerate the discipline of AI engineering, we are surfacing what organizations need not only to create AI mission solutions but also to approach AI from an engineering point of view, so that teams can create reliable AI solutions again and again: How do you create human-centered, scalable, robust, and secure AI solutions? How do you know if AI is right for your problem? How do teams implement ethical AI principles? Who do you need on AI teams?
AI and the Workforce
Is Your Organization Ready for AI?
In this conversation, digital transformation lead Dr. Rachel Dzombak and research scientist Carol Smith, both with the SEI’s Emerging Technology Center at Carnegie Mellon University, discuss how AI engineering can support organizations in implementing AI systems. The conversation covers the steps that organizations need to take (as well as the hard conversations that need to occur) before they are AI ready.
Time required: 30 minutes
5 Ways to Start Growing an AI Engineering Workforce
This blog post discusses growth in the field of artificial intelligence (AI) and how organizations can hire and train staff to take advantage of the opportunities afforded by AI and machine learning—and the critical need for an AI engineering discipline to grow the AI workforce.
Time required: 10 minutes
Tailored Learning for Teams
The SEI has developed several workshops for teams at the request of organizations in a variety of sectors. These workshops can be tailored to your needs and mission challenges, and most can be delivered in formats that range from half a day to a week. Contact us to bring our experts to your team or to request a workshop on a topic not listed here.
- Introduction to AI Engineering
What does it take to create AI systems that are human-centered, robust and secure, and scalable? Drawing on case studies from the Department of Defense and industry, instructors will introduce frameworks and resources for how to design, develop, deploy, and maintain transformative and trustworthy AI.
- Problem Framing for AI
This workshop equips teams to ask questions that drill into the root cause of problems, foster empathy for problem stakeholders, understand where and how technology fits in, and ultimately achieve innovative outcomes that leverage AI systems.
- Where to Start with AI Ethics
In this workshop, you’ll learn how to implement AI ethics tools and practices that help your team coalesce around shared goals.
- Essential Skillsets for Data Technicians
Leaders and managers of AI teams and projects will learn how to go beyond lists of academic or technical qualifications to spot the perspectives they need to steward the data that drives their AI solutions.
- Data and Tactical ML Pipelines
This workshop introduces technicians to the importance of data, the flow of data to an application, and how data pipelines affect models. A brief illustrative sketch of this idea follows this list.
- AI for Leaders
This course covers how leaders can enable organizations to identify, develop, and integrate AI applications to obtain the efficiency and performance improvements these technologies enable. It will also help them understand the risks, ethical concerns, harms, and biases of using AI applications throughout their lifecycles.
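To make the data-pipeline idea from the Data and Tactical ML Pipelines workshop concrete, here is a minimal sketch (illustrative only, not workshop material) that chains data preparation and a model with scikit-learn, so that any change to the data flow propagates directly to model behavior. The synthetic dataset and the chosen components are assumptions for illustration.

```python
# Minimal sketch: data preparation and the model form a single pipeline,
# so changes to the data flow directly affect model behavior.
# Assumes scikit-learn is installed; the data is a synthetic stand-in.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for mission data flowing into an application.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Preprocessing and the model are chained: every upstream change
# (scaling, cleaning, feature selection) propagates to the model.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])

pipeline.fit(X_train, y_train)
print("Held-out accuracy:", pipeline.score(X_test, y_test))
```

Treating the pipeline as one unit, rather than hand-preparing data separately, is what keeps training and deployment behavior consistent.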
Three Pillars of AI Engineering
Human-Centered AI
This white paper discusses Human-Centered AI: systems that are designed to work with, and for, people. As the desire to use AI systems grows, human-centered engineering principles are critical to guide system development toward effective implementation and minimize unintended consequences.
Time required: 15 minutes
Robust & Secure AI
This white paper discusses Robust and Secure AI systems: AI systems that reliably operate at expected levels of performance, even when faced with uncertainty and in the presence of danger or threat. These systems have built-in structures, mechanisms, or mitigations to prevent, avoid, or provide resilience to dangers from a particular threat model.
Time required: 15 minutes
Scalable AI
This white paper discusses Scalable AI: the ability of AI algorithms, data, models, and infrastructure to operate at the size, speed, and complexity required for the mission. Scalability is a critical concept in many engineering disciplines and is crucial to realizing operational AI capabilities.
Time required: 15 minutes
Learn More
Creating a Large Language Model Application Using Gradio
December 04, 2023 Blog Post
Tyler Brooks
This post explains how to build a large language model application across three primary use cases: basic question-and-answer, question-and-answer over documents, and document... (A minimal illustrative Gradio sketch follows this list.)
Measuring the Trustworthiness of AI Systems
October 16, 2023 Podcast
Katherine-Marie Robinson, Carol J. Smith, Alexandrea Steiner
Carol Smith, Katie Robinson, and Alex Steiner discuss how to measure the trustworthiness of an AI system as well as questions that organizations should ask before determining if they want to employ a new AI...
A Retrospective in Engineering Large Language Models for National Security
September 29, 2023 White Paper
Shannon Gallagher, Andrew O. Mellinger, Jasmine Ratchford, Nick Winski, Tyler Brooks, Eric Heim, Nathan M. VanHoudnos, Swati Rallapalli, William Nichols, Bryan Brown, Angelique McDowell, Hollen Barmer
This document discusses the findings, recommendations, and lessons learned from engineering a large language model for national security use...
Ask Us Anything: Generative AI Edition
September 25, 2023 Webcast
Douglas Schmidt (Vanderbilt University), John E. Robert, Rachel Dzombak, Jasmine Ratchford, Matthew Walsh, Shing-hon Lau
In this webcast, SEI researchers answered audience questions and discussed what generative AI does well and the associated risk and...
Evaluating Trustworthiness of AI Systems
September 12, 2023 Webcast
Carol J. Smith, Carrie Gardner
In this webcast, SEI researchers discuss how to evaluate trustworthiness of AI systems given their dynamic nature and the challenges of managing ongoing responsibility for maintaining...
Contextualizing End-User Needs: How to Measure the Trustworthiness of an AI System
July 17, 2023 Blog Post
Carrie Gardner, Katherine-Marie Robinson, Carol J. Smith, Alexandrea Steiner
As potential applications of artificial intelligence (AI) continue to expand, the question remains: will users want the technology and trust it? This blog post explores how to measure the trustworthiness of...
The Challenge of Adversarial Machine Learning
May 15, 2023 Blog Post
Matt Churilla, Nathan M. VanHoudnos, Robert W. Beveridge
This SEI Blog post examines how machine learning systems can be subverted through adversarial machine learning, the motivations of adversaries, and what researchers are doing to mitigate their...
AI Next Generation Architecture
March 16, 2023 Webcast
Michael Mattarock
During this webcast, Mike Mattarock discusses some of the primary quality attributes guiding design, and how a Next Generation Architecture can facilitate an integrated future...
Play it Again Sam! or How I Learned to Love Large Language Models
February 13, 2023 Blog Post
Jay Palat
This post explores what new advancements in AI and large language models mean for software...
Bridging the Gap between Requirements Engineering and Model Evaluation in Machine Learning
December 15, 2022 Blog Post
Violet Turri, Eric Heim
Requirements engineering for machine learning (ML) is not standardized and considered one of the hardest tasks in ML development. This post defines a simple evaluation framework centered around validating...
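As a companion to the Creating a Large Language Model Application Using Gradio post listed above, the sketch below shows the basic question-and-answer pattern in Gradio. It is an illustration under assumptions, not code from the post: answer_question is a hypothetical placeholder for a call to whatever large language model you use.

```python
# Minimal Gradio sketch of the basic question-and-answer use case.
# Assumes the gradio package is installed; answer_question is a
# hypothetical stand-in for a call to your own large language model.
import gradio as gr

def answer_question(question: str) -> str:
    # Placeholder: replace with a real model call (API or local inference).
    return f"(model response to: {question})"

demo = gr.Interface(
    fn=answer_question,                   # called on each submission
    inputs=gr.Textbox(label="Question"),  # single text input
    outputs=gr.Textbox(label="Answer"),   # single text output
    title="Basic Q&A",
)

demo.launch()  # serves a local web UI in the browser
```

The same structure extends to question-and-answer over documents by retrieving relevant passages and including them in the prompt before the model call.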