Citing an internal memo, Bloomberg reports that the US Space Force has temporarily barred its personnel from using generative artificial intelligence and large language models on the job due to potential security risks.
The memorandum, addressed to the Guardian Workforce, prohibits the use of government data in web-based AI tools without special approval. Representatives of the US Space Force confirmed the memo's contents to the publication.
Lisa Costa, head of the Space Force’s Technology and Innovation Directorate, noted that neural networks “will undoubtedly revolutionise workflows and accelerate task execution by personnel.”
However, she expressed concerns about cybersecurity requirements and data handling, calling for a “responsible” rollout of artificial intelligence.
According to Nicolas Chaillan, founder of the AI platform Ask Sage, the restrictions affected at least 500 Space Force personnel who had been using the service.
He criticised the department’s decision and added that Ask Sage tools are used by more than 10,000 personnel in other parts of the U.S. Department of Defense, including 6,500 people in the Air Force.
Chaillan resigned as the Pentagon’s first Chief Software Officer in 2021, criticising its slow adoption of AI and warning that the United States was falling behind China.
The head of Ask Sage also said that some department employees pay for such software out of their own pockets because AI tools significantly ease the task of writing reports.
In August, the Department of Defense announced the formation of Task Force Lima to study and develop generative artificial intelligence; any resulting products are to be integrated into the U.S. security apparatus.
Earlier, U.S. authorities issued a declaration on the military use of artificial intelligence, including a provision on “human accountability”.
