AI Bright Spot: Optimizing space mission planning with AI at NASA

May 14, 2026

The Partnership for Public Service AI Center for Government® is publishing a series of blogs to celebrate how artificial intelligence and intelligent automation are being used in government to serve the public. We recently spoke with Rita Sambruna, deputy director of the Astrophysics Science Division at NASA’s Goddard Space Flight Center and an alumna of the Partnership’s AI Government Leadership Program. In our conversation, Sambruna discussed how AI is making an impact at a government agency long associated with cutting-edge technology.

This interview has been edited and condensed for clarity.

NASA is a name that captures the public’s imagination. How is your team using AI? Where is it showing up, and how are you thinking about it?

“NASA has been using AI for a long time, but the people who were using it were mainly in information technology. It’s only recently, with the creation of the Chief AI Office and executive orders [stating] that agencies should use AI, that it has surged into the consciousness of main-level employees.

“At Goddard Space Flight Center, I supervise a workforce of 400 people, mostly contractors but also civil servants. People are using AI for their research, especially for coding. We have an internal version of ChatGPT, called ChatGSFC, that runs various models. One of the most important is Claude, which is very useful for coding. A lot of my team is using ChatGSFC to improve, check and develop code. We also use Copilot Premium.

“My goal is for the team to start considering any form of AI more as an assistant in everyday activities rather than just a code resolver. That is my overall ambition for the division.”

Are there any ways in which you are seeing folks on your team use ChatGSFC or other AI tools in that assistive manner right now?

“Some people have been using [ChatGSFC] to summarize long articles and to extract the highlights of those articles.
“People have also used AI to take notes, mostly in Teams. At the same time, I see people who are not entirely convinced that AI is a good thing. I was told by one senior civil servant that they will never use AI because they ‘are not giving up use of their brain.’ To me, this is not understanding what AI is for.

“For my part, I’m trying to develop an AI tool to give more time back to people. For space missions, we spend a lot of time in meetings, and sometimes these meetings just reinforce the status quo. My AI project, which was also my capstone project for the AI Government Leadership Program, pulls together automated status reports from notes on Slack, Teams, SharePoint or other online platforms and compiles a report for leadership, sending it automatically on a weekly cadence.”

That’s fascinating! When it comes to developing ideas like this one, what factors are considered? Are there any governance processes that ensure new tools like this one are used in a high-quality way?

“This is definitely at the forefront of our thoughts: generating trust from the users of AI. We are scientists, and we want to know what is going on. We don’t accept ‘black boxes.’ The trick for my organization is to make it as transparent as possible and explain exactly how the process works with the AI tool. This includes the tool’s inputs and outputs, the kind of data it has been trained on and the quality of that data.

“One other aspect of generating trust is that we do not release tools in their full capacity and say ‘go.’ We are doing pilot cases where we single out a couple of teams to test the tool, keeping humans in the loop to generate a sense of ownership over that tool.

“I am also engaged at the agency level in a group led by the CAIO, Krista Kinnard. I am co-leading with Krista a workforce [initiative] focused on responsible AI use in science.
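The weekly status-report pipeline Sambruna describes can be sketched in rough outline. Everything below is illustrative: the note structure, field names and report format are assumptions, and a real deployment would pull notes from the Slack, Teams and SharePoint APIs and email the result on a schedule rather than printing it.

```python
from datetime import date

def compile_status_report(notes, week_of):
    """Group free-form notes by source and format a weekly status report.

    `notes` is a list of dicts with 'source' and 'text' keys. In a real
    deployment these would be fetched from the Slack, Teams or SharePoint
    APIs (hypothetical integration, not shown here), and the finished
    report would be emailed to leadership on a weekly schedule.
    """
    by_source = {}
    for note in notes:
        by_source.setdefault(note["source"], []).append(note["text"])

    lines = [f"Weekly status report ({week_of.isoformat()})"]
    for source in sorted(by_source):
        lines.append(f"\n{source}:")
        for text in by_source[source]:
            lines.append(f"  - {text}")
    return "\n".join(lines)

# Stand-in notes; a live system would collect these automatically.
notes = [
    {"source": "Teams", "text": "Instrument review complete"},
    {"source": "Slack", "text": "Budget figures updated"},
    {"source": "Teams", "text": "Schedule risk flagged on detector delivery"},
]
report = compile_status_report(notes, date(2026, 5, 11))
print(report)
```

The grouping-and-formatting step is deliberately plain; the AI component Sambruna mentions would sit between collection and formatting, summarizing raw notes before they are compiled.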
If we are going to use AI, we also need to demonstrate that it’s being used when it’s necessary and that it’s used ethically and with attention to possible biases and exclusions.”

Are there best practices or lessons learned from your team’s use of AI that you think could be applicable to other agencies?

“I think it’s very important that agencies and anybody using AI realize that AI requires an interdisciplinary team. It’s not a good idea to make decisions using just IT experts. AI is going to change everything, and it impacts many aspects of the agency with every decision.

“If you want to implement the use of AI to do a certain operation, you need to have legal people in the room to explain ethical boundaries. You need procurement people in the room to tell you what the resources are and what you can afford. You need scientists who are thinking about scientific applications of the tool and software developers who are thinking about the practical way to put the tool together. The team completing new AI installations must be inclusive and listen to many voices before making the final decision.”

Thank you so much for sharing your thoughts with us, Rita!

Continuing the conversation

The AI Center for Government champions AI innovators across all levels of government. If your agency is taking steps to lead AI well, we’d love to hear from you. Join us as we highlight real-world AI use cases and convene public sector leaders from across the country to share tools and insights to lead confidently in the age of AI.

We’re here to help! Sign up for our newsletter. Follow us on LinkedIn. Get in touch! Email us at [email protected].