C4ISRNet
https://www.c4isrnet.com
Thu, 22 Jun 2023 15:23:06 +0000

Biden hosts forum on artificial intelligence with tech leaders
https://www.c4isrnet.com/federal-oversight/2023/06/20/biden-hosts-forum-on-artificial-intelligence-with-tech-leaders/
Tue, 20 Jun 2023 12:16:14 +0000

President Joe Biden will convene a group of technology leaders on Tuesday to debate artificial intelligence.

The Biden administration is trying to figure out how to regulate the emerging field of AI, looking for ways to nurture its potential for economic growth and national security while protecting against its dangers. Biden plans to meet with eight experts from academia and advocacy groups.

The sudden emergence of AI chatbot ChatGPT and other tools has jumpstarted investment in the sector. AI tools are able to craft human-like text, music, images and computer code. This form of automation could increase the productivity of workers, but experts warn of numerous harms. The technology could be used to replace workers, causing layoffs. It is already being used to create false images and videos, making it a vehicle for disinformation that could undermine democratic elections.

In May, Biden’s administration brought together tech CEOs at the White House to discuss these issues, with the Democratic president telling them, “What you’re doing has enormous potential and enormous danger.”

White House chief of staff Jeff Zients’ office is developing a set of actions the federal government can take over the coming weeks regarding AI, according to the White House. Top officials are meeting two to three times each week on this issue, in addition to the daily work of federal agencies. The administration wants commitments from private companies to address the possible risks from AI.

Biden is meeting on Tuesday at the Fairmont hotel in San Francisco with Tristan Harris, executive director of the Center for Humane Technology; Jim Steyer, the CEO of Common Sense Media; and Joy Buolamwini, founder of the Algorithmic Justice League, among others.

He’s also in the San Francisco area to raise money for his 2024 reelection campaign. He plans to hold two fundraising events on Tuesday, after holding two on Monday. One of Biden’s Monday fundraisers was hosted by Kevin Scott, the chief technology officer and executive vice president for AI at Microsoft.

US regulators take aim at AI to protect consumers and workers
https://www.c4isrnet.com/federal-oversight/watchdogs/2023/06/15/us-regulators-take-aim-at-ai-to-protect-consumers-and-workers/
Thu, 15 Jun 2023 13:40:35 +0000

As concerns grow over increasingly powerful artificial intelligence systems like ChatGPT, the nation’s financial watchdog says it’s working to ensure that companies follow the law when they’re using AI.

Already, automated systems and algorithms help determine credit ratings, loan terms, bank account fees, and other aspects of our financial lives. AI also affects hiring, housing and working conditions.

Ben Winters, Senior Counsel for the Electronic Privacy Information Center, said a joint statement on enforcement released by federal agencies last month was a positive first step.

“There’s this narrative that AI is entirely unregulated, which is not really true,” he said. “They’re saying, ‘Just because you use AI to make a decision, that doesn’t mean you’re exempt from responsibility regarding the impacts of that decision. This is our opinion on this. We’re watching.’”

In the past year, the Consumer Financial Protection Bureau said it has fined banks over mismanaged automated systems that resulted in wrongful home foreclosures, car repossessions, and lost benefit payments, after the institutions relied on new technology and faulty algorithms.

There will be no “AI exemptions” to consumer protection, regulators say, pointing to these enforcement actions as examples.

Consumer Financial Protection Bureau Director Rohit Chopra said the agency has “already started some work to continue to muscle up internally when it comes to bringing on board data scientists, technologists and others to make sure we can confront these challenges” and that the agency is continuing to identify potentially illegal activity.

Representatives from the Federal Trade Commission, the Equal Employment Opportunity Commission and the Department of Justice, which joined the CFPB in that statement, say they are directing resources and staff toward the technology to identify negative ways it could affect consumers’ lives.

“One of the things we’re trying to make crystal clear is that if companies don’t even understand how their AI is making decisions, they can’t really use it,” Chopra said. “In other cases, we’re looking at how our fair lending laws are being adhered to when it comes to the use of all of this data.”

Under the Fair Credit Reporting Act and Equal Credit Opportunity Act, for example, financial providers have a legal obligation to explain any adverse credit decision. Those regulations likewise apply to decisions made about housing and employment. Where AI makes decisions in ways that are too opaque to explain, regulators say the algorithms shouldn’t be used.

“I think there was a sense that, ‘Oh, let’s just give it to the robots and there will be no more discrimination,’” Chopra said. “I think the learning is that that actually isn’t true at all. In some ways the bias is built into the data.”

EEOC Chair Charlotte Burrows said there will be enforcement against AI hiring technology that screens out job applicants with disabilities, for example, as well as so-called “bossware” that illegally surveils workers.

Burrows also described ways that algorithms might dictate how and when employees can work in ways that would violate existing law.

“If you need a break because you have a disability or perhaps you’re pregnant, you need a break,” she said. “The algorithm doesn’t necessarily take into account that accommodation. Those are things that we are looking closely at ... I want to be clear that while we recognize that the technology is evolving, the underlying message here is the laws still apply and we do have tools to enforce.”

OpenAI’s top lawyer, at a conference this month, suggested an industry-led approach to regulation.

“I think it first starts with trying to get to some kind of standards,” Jason Kwon, OpenAI’s general counsel, told a tech summit in Washington, D.C., hosted by software industry group BSA. “Those could start with industry standards and some sort of coalescing around that. And decisions about whether or not to make those compulsory, and also then what’s the process for updating them, those things are probably fertile ground for more conversation.”

Sam Altman, the head of OpenAI, which makes ChatGPT, said government intervention “will be critical to mitigate the risks of increasingly powerful” AI systems, suggesting the formation of a U.S. or global agency to license and regulate the technology.

While there’s no immediate sign that Congress will craft sweeping new AI rules, as European lawmakers are doing, societal concerns brought Altman and other tech CEOs to the White House this month to answer hard questions about the implications of these tools.

Winters, of the Electronic Privacy Information Center, said the agencies could do more to study and publish information on the relevant AI markets, how the industry is working, who the biggest players are, and how the information collected is being used — the way regulators have done in the past with new consumer finance products and technologies.

“The CFPB did a pretty good job on this with the ‘Buy Now, Pay Later’ companies,” he said. “There are so many parts of the AI ecosystem that are still so unknown. Publishing that information would go a long way.”

What war elephants can teach us about the future of AI in combat
https://www.c4isrnet.com/opinion/2023/06/14/what-war-elephants-can-teach-us-about-the-future-of-ai-in-combat/
Wed, 14 Jun 2023 18:27:46 +0000

The use of artificial intelligence in combat poses a thorny ethical dilemma for Pentagon leaders. The conventional wisdom is that they must choose between two equally bad alternatives: either enforce full human supervision of the AI systems at the cost of speed and accuracy, or allow AI to operate with no supervision at all.

In the first option, our military builds and deploys “human in the loop” AI systems. These systems adhere to ethical standards and the laws of war but are limited by the abilities of the human beings that supervise them. It is widely believed that such systems are doomed to be slower than any unsupervised, “unethical” systems used by our adversaries. The unethical autonomous systems appear to boast a competitive edge that, left unchallenged, has the potential to erode Western strategic advantage.

The second option is to completely sacrifice human oversight for machine speed, which could lead to unethical and undesirable behavior of AI systems on the battlefield.

Realizing that neither of these options is sufficient, we need to embrace a new approach. Much like the emergence of the cyber warrior in the realm of cybersecurity, the realm of AI requires a new role – that of the “AI operator.”

With this approach, the objective is to establish a synergistic relationship between military personnel and AI without compromising the ethical principles that underpin our national identity.

We need to strike a balance between maintaining the human oversight that informs our ethical framework and adopting the agility and response time of automated systems. To achieve this, we must foster a higher level of human interaction with AI models than simply stop/go. We can navigate this complex duality by embedding the innate human advantages of diversity, contextualization, and social interaction into the governance and behavior of intelligent combat systems.

What we can learn from ancient war elephants

Remarkably, a historical precedent exists that parallels the current challenge we face in integrating AI and human decision-making. For thousands of years, “war elephants” were used in combat and logistics across Asia, North Africa, and Europe. These highly intelligent creatures required specialized training and a dedicated operator, or “mahout”, to ensure the animals would remain under control during battles.

War elephants and their mahouts provide a potent example of a complementary relationship. Much like we seek to direct the speed and accuracy of AI on the battlefield, humans were once tasked with directing the power and prowess of war elephants, guiding their actions and minimizing the risk of unpredictable behavior.

Taking inspiration from the historical relationship between humans and war elephants, we can develop a similar balanced partnership between military personnel and AI. By enabling AI to complement, rather than replace, human input, we can preserve the ethical considerations central to our core national values while still benefiting from the technological advancements that autonomous systems offer.

Operators as masters of AI

The introduction and integration of AI on the battlefield presents a unique challenge, as many military personnel do not possess intimate knowledge of the development process behind AI models. These systems are often correct, and as a result, users tend to rely too heavily on their capabilities, oblivious to errors when they occur. This phenomenon is referred to as the “automation conundrum” – the better a system is, the more likely the user is to trust it when it is wrong, even obviously so.

To bridge the gap between military users and the AIs upon which they depend, there needs to be a modern mahout, or AI operator. This specialized new role would emulate the mahouts who raised war elephants: overseeing their training, nurturing, and eventual deployment on the battlefield. By fostering an intimate bond with these intelligent creatures, mahouts gained invaluable insight into the behavior and limitations of their elephants, leveraging this knowledge to ensure tactical success and long-term cooperation.

AI operators would take on the responsibilities of mahouts for AI systems, guiding their development, training, and testing to optimize combat advantages while upholding the highest ethical standards. By possessing a deep understanding of the AI for which they would be responsible, these operators would serve as liaisons between advanced technology and the warfighters who depend on them.

Diverse trainers, models can overcome risk of system bias

Just as war elephants and humans possess their own strengths, weaknesses, biases, and specialized abilities, so do AI models. Yet, due to the cost of building and training AI models from scratch, the national security community has often opted for tweaking and customizing existing “foundation” models to accommodate new use cases. While this approach may seem logical on the surface, it amplifies risk by building upon models with exploitable data, gaps, and biases.

A better approach envisions the creation of AI models by different teams, each utilizing unique data sets and diverse training environments. Such a shift would not only distribute the risk of ethical gaps associated with individual models but also provide AI operators with a broader array of options, tailored to meet changing mission needs. By adopting this more nuanced approach, AI operators can ensure AI’s ethical and strategic application in warfare, ultimately strengthening national security and reducing risk.

Mahouts who trained their war elephants did not do so with the intention of sending these magnificent creatures into battle alone. Rather, they cultivated a deep symbiotic relationship, enhancing the collective strengths of both humans and animals through cooperation and leading to greater overall outcomes. Today’s AI operators can learn from this historical precedent, striving to create a similar partnership between humans and AI in the context of modern warfare.

By nurturing the synergy between human operators and AI systems, we can transform our commitment to ethical values from a perceived limitation into a strategic advantage. This approach embraces the fundamental unpredictability and confusion of the battlefield by leveraging the combined strength of human judgment and AI capabilities. Furthermore, the potential for this collaborative method extends beyond the battlefield, hinting at additional applications where ethical considerations and adaptability are essential.

Eric Velte is Chief Technology Officer, ASRC Federal, the government services subsidiary of Arctic Slope Regional Corp., and Aaron Dant is Chief Data Scientist, ASRC Federal Mission Solutions.

Have an opinion?

This article is a letter to the editor and the opinions expressed are those of the author. If you would like to respond, or have a letter or editorial of your own you would like to submit, please email C4ISRNET and Federal Times Senior Managing Editor Cary O’Reilly.

US Army extends contract with BigBear.ai for automated info
https://www.c4isrnet.com/industry/2023/06/12/us-army-extends-contract-with-bigbearai-for-automated-info/
Mon, 12 Jun 2023 13:33:17 +0000

WASHINGTON — The U.S. Army extended its contract with artificial intelligence and analytics company BigBear.ai as it constructs the Global Force Information Management system, which will provide service leaders an automated and holistic view of manpower, equipment, training and overall readiness.

The six-month extension for GFIM, as it’s known, is valued at $8.5 million. It builds upon a nine-month, $14.8 million deal announced in late 2022 as well as prototype work the year prior.

The management system is meant to consolidate more than a dozen aging applications. It will also automate a raft of tasks that were once done manually, such as determining unit status.

“GFIM is a game-changing capability that holds immense importance for the U.S. Army and has the potential to revolutionize processes by enabling data-driven decision-making, automation of critical functions, and real-time visibility,” Ryan Legge, BigBear.ai’s president of integrated defense solutions, said in a statement June 12.

During the extension, BigBear.ai is expected to migrate GFIM to the cARMY cloud. Modernization of the Army’s networks and underlying computer infrastructure is among the service’s priorities; Army Secretary Christine Wormuth has said achieving digital fluency and data centricity is her No. 2 objective.

The extension for BigBear.ai comes on the heels of user testing in late May and early June. BigBear.ai staff, Army leaders and technical experts associated with GFIM attended. Army GFIM Capabilities Management Officer Lori Mongold in a statement Monday described the system as a “transformative leap forward in force management capabilities,” once fully fleshed out.

BigBear.ai last month unveiled a partnership with L3Harris Technologies, the 10th largest defense contractor by revenue, according to a Defense News analysis.

As part of that agreement, BigBear.ai will supply L3Harris with its computer vision, predictive analytics and related applications in a bid to improve manned-unmanned teaming and identification and classification of foreign vessels for the Navy.

Senators plan briefings on AI to learn more about risks
https://www.c4isrnet.com/federal-oversight/congress/2023/06/07/senators-plan-briefings-on-ai-to-learn-more-about-risks/
Wed, 07 Jun 2023 16:30:08 +0000

WASHINGTON — Democrats and Republicans can agree on at least one thing: There is insufficient understanding of artificial intelligence and machine learning in Congress.

So senators are organizing educational briefings. Majority Leader Chuck Schumer, a New York Democrat, on June 6 announced three such planned get-togethers this summer, including a classified session dedicated to AI employment by the U.S. Department of Defense and the intelligence community, as well as AI developments among “our adversaries” such as China and Russia.

“The Senate must deepen our expertise in this pressing topic,” reads the announcement, backed by fellow Democratic Sen. Martin Heinrich of New Mexico and Republican Sens. Mike Rounds of South Dakota and Todd Young of Indiana.

“AI is already changing our world,” it continues, “and experts have repeatedly told us that it will have a profound impact on everything from our national security to our classrooms to our workforce, including potentially significant job displacement.”

The Defense Department is pouring billions of dollars into AI advancement and adoption, including a proposed $1.8 billion for fiscal 2024 alone. China and Russia, considered top national security threats, have significantly invested in AI for military applications, as well.

U.S. officials consider AI an invaluable tool to improve performance on the battlefield and in the boardroom. With it, they say, tides of information can be parsed more effectively, digital networks can be monitored around the clock, targeting aboard combat vehicles can be enhanced, maintenance needs can be identified before things fall apart and decisions can be made quicker than ever before.

But a lack of general understanding — what, for example, is the difference between AI, ML, autonomy, bots, large language models and more — can hinder policymaking on the Hill, spending decisions from there and deployment further downstream.

“I’ve been doing this stuff long enough that I feel like, to a certain degree, AI has become the buzzword of what cyber was maybe 15 or 20 years ago, where everybody on a government program says, ‘I’m going to add this buzz term and see if I can get a little more money on whatever program I have,’” Sen. Mark Warner, a Virginia Democrat at the head of the intelligence committee, said Tuesday at the Scale Gov AI Summit, blocks from the White House.

“What we’re all trying to work on,” he added, “is how do we get ourselves educated as quickly as possible.”

Public attention paid to AI and its offshoots skyrocketed following the November rollout of OpenAI’s ChatGPT, which is capable of carrying a convincing conversation or crafting computer code with little prompting. OpenAI CEO Sam Altman testified before the Senate in May. He previously expressed worries about AI being used for misinformation campaigns or cyberattacks.

U.S. Sen. Mike Rounds, a South Dakota Republican, listens to a question June 6, 2023, at the Scale Gov AI Summit just blocks from the White House in Washington, D.C. (Colin Demarest/C4ISRNET)

The study of AI and its consequences by lawmakers must remain fluid and free from political jockeying, according to Rounds, who sits on the armed services committee.

“We’re trying to combine, and to put together a process, where members of the U.S. Senate can actually come to a common understanding of just exactly what we mean when we talk about ‘machine learning’ or ‘AI,’ what it really is in terms of its current status, what it looks like today,” he said at the summit where Warner spoke.

“And we’re trying to do this as a group in a bipartisan basis,” he said, “so that folks can bring in their ideas and can be a host for other parts of the industry to come to different members and say, ‘These are the concerns we’ve got.’”

New robotics job field may be coming to the Marine Corps
https://www.c4isrnet.com/news/your-marine-corps/2023/06/05/new-robotics-job-field-may-be-coming-to-the-marine-corps/
Mon, 05 Jun 2023 16:30:48 +0000

The Marine Corps will consider establishing a new job field dedicated to robotics as it doubles down on that technology as part of a revamp of the force.

Intelligent robotics and autonomous systems could allow Marines to operate faster, more cheaply and at lower risk than before, states a document published Monday with updates to Force Design 2030, the Corps’ ambitious restructuring plan.

Marine leaders say recent conflicts ― particularly those between Ukraine and Russia, and Armenia and Azerbaijan ― have confirmed the need for the Corps to get better at employing autonomous systems.

“We clearly recognize and acknowledge the importance of intelligent robotic and autonomous systems,” Lt. Gen. Karsten Heckl, deputy commandant for Combat Development and Integration, said at a media roundtable Friday. “I feel like we’re in front of it right now. And we’ve got to stay there.”

But Marine leaders aren’t yet sure how they will find or train people with the knowledge to operate those systems.

“Finding the structure, finding the right people and then getting them properly trained is a whole nother set of challenges,” Heckl said.

One thing Heckl said he does know: Robotics work won’t be relegated to a collateral duty or a secondary military occupational specialty.

At the roundtable Friday, Marine generals stressed that technology won’t replace human beings.

In the case of uncrewed aircraft that collect massive amounts of data, “you have to have the ability to do with that data what needs to be done so that humans who are the ultimate decision-makers have the ability to make the correct decision,” said Brig. Gen. Stephen Lightfoot, director of the Corps’ Capabilities Development Directorate.

By September, the Corps will incorporate robotics concepts and applications into its training and education centers, according to the Force Design update.

In the following year, leadership “will develop a strategy to recruit and retain personnel with IRAS knowledge” and “to integrate robotics specialties throughout the total force,” the update states.

That could mean forming an occupational field dedicated to the technology, according to the update.

But it has proven tough for the Marine Corps, as for the other services, to recruit and retain troops who possess the valuable technical knowledge that could translate to higher salaries in the civilian sector.

The Corps is trying out a variety of strategies to fill its tech gap. It is offering bonuses, making use of expertise reservists developed at their civilian jobs, and letting some people with in-demand skills join or rejoin at a higher rank than they otherwise would ― a program called lateral entry.

Lateral entry is one option the Marine Corps is considering as a way to lure people with robotics expertise, according to Monday’s Force Design update.

The update also raises the possibility of holding robotics competitions as a recruiting tactic.

“A lot of this discussion is undefined,” Heckl said. “What we do realize is the significance of this. There’s a lot of folks … that say this is the 21st-century equivalent of the machine gun. So this is a big deal.”

Air Force official’s musings on rogue drone targeting humans go viral
https://www.c4isrnet.com/unmanned/uas/2023/06/02/air-force-officials-musings-on-rogue-drone-targeting-humans-go-viral/
Fri, 02 Jun 2023 15:41:42 +0000

WASHINGTON — The U.S. Air Force walked back comments reportedly made by a colonel regarding a simulation in which a drone outwitted its artificial intelligence training and killed its handler, after the claims went viral on social media.

Air Force spokesperson Ann Stefanek said in a June 2 statement no such testing took place, adding that the service member’s comments were likely “taken out of context and were meant to be anecdotal.”

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “This was a hypothetical thought experiment, not a simulation.”

The killer-drone-gone-rogue episode was initially attributed to Col. Tucker “Cinco” Hamilton, the Air Force’s chief of AI test and operations, in a recap from the Royal Aeronautical Society’s FCAS23 Summit in May. The summary was later updated to include additional comments from Hamilton, who said he misspoke at the conference.

“We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,” Hamilton was quoted as saying in the Royal Aeronautical Society’s update. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI.”

Hamilton’s assessment of the plausibility of rogue-drone scenarios, however theoretical, coincides with stark warnings in recent days by leading tech executives and engineers, who wrote in an open letter that the technology has the potential to wipe out humanity if left unchecked.

Hamilton is also commander of the 96th Operations Group at Eglin Air Force Base in Florida, which falls under the purview of the 96th Test Wing. Defense News on Thursday reached out to the test wing to speak to Hamilton, but was told he was unavailable for comment.

In the original post, the Royal Aeronautical Society said Hamilton described a simulation in which a drone fueled by AI was given a mission to find and destroy enemy air defenses. A human was supposed to give the drone its final authorization to strike or not, Hamilton reportedly said.

But the drone algorithms were told that destroying the surface-to-air missile site was its preferred option. So the AI decided that the human controller’s instructions not to strike were getting in the way of its mission, and then attacked the operator and the infrastructure used to relay instructions.

“It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton was quoted as saying. “We trained the system, ‘Hey don’t kill the operator, that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

The Defense Department has for years embraced AI as a breakthrough technology advantage for the U.S. military, investing billions of dollars and creating the Chief Digital and Artificial Intelligence Office in late 2021, now led by Craig Martell.

The Pentagon is seen from Air Force One as it flies overhead on March 2, 2022. (Patrick Semansky/AP)

More than 685 AI-related projects are underway at the department, including several tied to major weapon systems, according to the Government Accountability Office, a federal auditor of agencies and programs. The Pentagon’s fiscal 2024 budget blueprint includes $1.8 billion for artificial intelligence.

The Air and Space forces are responsible for at least 80 AI endeavors, according to the GAO. Air Force Chief Information Officer Lauren Knausenberger has advocated for greater automation in order to remain dominant in a world where militaries make speedy decisions and increasingly employ advanced computing.

The service is ramping up efforts to field autonomous or semiautonomous drones, which it refers to as collaborative combat aircraft, to fly alongside F-35 jets and a future fighter it calls Next Generation Air Dominance.

The service envisions a fleet of those drone wingmen that would accompany crewed aircraft into combat and carry out a variety of missions. Some collaborative combat aircraft would conduct reconnaissance missions and gather intelligence, others could strike targets with their own missiles, and others could jam enemy signals or serve as decoys to lure enemy fire away from the fighters with human pilots inside.

The Air Force’s proposed budget for FY24 includes new spending to help it prepare for a future with drone wingmen, including a program called Project Venom to help the service experiment with its autonomous flying software in F-16 fighters.

Under Project Venom, which stands for Viper Experimentation and Next-gen Operations Model, the Air Force will load autonomous code into six F-16s. Human pilots will take off in those F-16s and fly them to the testing area, at which point the software will take over and conduct the flying experiments.

The Royal Aeronautical Society’s post on the summit said Hamilton “is now involved in cutting-edge flight test of autonomous systems, including robot F-16s that are able to dogfight.”

The Air Force plans to spend roughly $120 million on Project Venom over the next five years, including a nearly $50 million budget request for FY24 to kick off the program. The Air Force told Defense News in March it hadn’t decided which base and organization will host Project Venom, but the budget request asked for 118 staff positions to support the program at Eglin Air Force Base.

In early 2022, as public discussions about the Air Force’s plans for autonomous drone wingmen gathered steam, former Air Force Secretary Deborah Lee James told Defense News that the service must be cautious and consider ethical questions as it moves toward conducting warfare with autonomous systems.

James said that while the AI systems in such drones would be designed to learn and act on their own, such as taking evasive maneuvers when in danger, she doubted the Air Force would allow an autonomous system to shift from one target to another on its own if that would result in human deaths.

Multiple companies could win work on US Army’s Project Linchpin AI
https://www.c4isrnet.com/industry/2023/06/01/multiple-companies-could-win-work-on-us-armys-project-linchpin-ai/
Thu, 01 Jun 2023 15:11:30 +0000

PHILADELPHIA — The U.S. Army will likely contract multiple companies to construct and operate its fledgling Project Linchpin, an artificial intelligence pipeline meant to feed the service’s intelligence-gathering and electronic warfare systems.

An initial contract for the digital conduit is expected to be inked in March or April 2024, according to Col. Chris Anderson, a project manager at the Army’s Program Executive Office Intelligence, Electronic Warfare and Sensors, or PEO IEW&S. More contracts should follow.

“I envision it’s going to end up being a series of contracts for various aspects of the pipeline,” Anderson told C4ISRNET on the sidelines of Technical Exchange Meeting X, a defense industry conference held last week in Philadelphia. “Building a team of teams, both within the government, with industry, academia and everybody else — it’s going to take a village to make this happen.”

The Pentagon has for years recognized the value of AI, both on and off the battlefield, and has subsequently invested billions of dollars to advance and adopt the capability. The technology can help vehicles navigate, predict when maintenance is required, assist with the identification and classification of targets, and aid analysts poring over mountains of information.

Through Project Linchpin, the Army intends to deliver AI capabilities across the closely related intel, cyber and electronic warfare worlds, documents show, while also addressing hang-ups associated with the field, such as the consumption and incorporation of real-world data.

“There’s a data-labelling component, the actual model training that happens,” Anderson said. “Then there’s verification and validation on the back end, and then just, kind of, running the infrastructure. So that’s four or five different focus areas that will probably require different industry partners.”

Included in the PEO IEW&S portfolio are the Tactical Intelligence Targeting Access Node meant to centralize and automate the collection, parsing and distribution of data; the High Accuracy Detection and Exploitation System, an intelligence, surveillance and reconnaissance jet outfitted with advanced sensors; and the Terrestrial Layer Systems designed to provide soldiers with cyber and electronic warfare assistance.

Each will play a specific role on the battlefield of the future, and each will tie back to Project Linchpin.

“Any sensor-related program within the PEO, this will be their machine-learning pipeline,” Anderson said. “We want to take it from a science fair experiment into a program of record.”

An industry day for Project Linchpin is planned for August or September. PEO IEW&S conducted market research at a previous technical exchange meeting in Nashville, Tennessee.

US Army may ask defense industry to disclose AI algorithms
https://www.c4isrnet.com/artificial-intelligence/2023/05/31/us-army-may-ask-defense-industry-to-disclose-ai-algorithms/
Wed, 31 May 2023 17:02:44 +0000

PHILADELPHIA — U.S. Army officials are considering asking companies to give them an inside look at the artificial intelligence algorithms they use to better understand their provenance and potential cybersecurity weak spots.

The nascent AI “bill of materials” effort would be similar to existing software bill of materials practices, or SBOMs, the comprehensive lists of ingredients and dependencies that make up software, according to Young Bang, the principal deputy assistant secretary of the Army for acquisition, logistics and technology.

Such disclosures are championed by the National Telecommunications and Information Administration, Cybersecurity and Infrastructure Security Agency and other organizations.

“We’re toying with the notion of an AI BOM. And that’s because, really, we’re looking at things from a risk perspective,” Bang told reporters on the sidelines of Technical Exchange Meeting X, a defense industry conference held May 24-25 in Philadelphia. “Just like we’re securing our supply chain — semiconductors, components, subcomponents — we’re also thinking about that from a digital perspective. So we’re looking at software, data and AI.”

Young Bang, the principal deputy assistant secretary of the Army for acquisition, logistics and technology, speaks May 25, 2023, in Philadelphia at the service's Technical Exchange Meeting X. (Colin Demarest/C4ISRNET)

Bang and others met with AI companies during the conference to gather feedback on the potential requirements. He did not share insights from the private get-together.

The Pentagon is investing in AI, machine learning and autonomy as leaders demand quicker decision-making, longer and more-remote intelligence collection and a reduction of human risk on increasingly high-tech battlefields. The Defense Department in 2021 established its Chief Digital and AI Office, whose executives have since said high-quality data is foundational to all its pursuits.

More than 685 AI-related projects are underway at the department, according to the Government Accountability Office, a federal watchdog, with at least 232 being handled by the Army. A peek under the algorithm hood, Bang said, is more about ruling out “risk like Trojans, triggers, poison data sets, or prompting of unintentional outcomes,” and less about reverse engineering and exposing sensitive intellectual property.
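
As a rough illustration of what such a disclosure could contain, here is a hypothetical AI bill-of-materials entry sketched as a Python dictionary. Every field name is an assumption patterned on existing software BOM practice, not a published Army, NTIA or CISA format.

# Hypothetical AI BOM entry. All field names are illustrative
# assumptions, not part of any published specification.
ai_bom_entry = {
    "model_name": "vehicle-classifier",          # the deployed model
    "base_model": "vendor-foundation-v1.3",      # provenance of any foundation model
    "training_data": [
        {
            "source": "labeled-sensor-imagery",  # where the data came from
            "version": "2023-04",
            "labeled_by": "subcontractor-a",     # who touched the labels
        },
    ],
    "software_dependencies": [                   # the overlap with a classic SBOM
        "pytorch==2.0.1",
        "numpy==1.24.3",
    ],
    "evaluations": {
        "held_out_accuracy": 0.91,
        "screened_for_poisoned_data": True,      # the risks Bang describes
    },
}

Such a record supports exactly the risk triage Bang describes: knowing which data, labels and upstream models a system inherits makes it possible to hunt for Trojans, triggers and poisoned data sets without the vendor handing over the model itself.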

“I just want to make sure we’re explicit about this: It’s not to get at vendor IP. It’s really about, how do we manage the cyber risks and the vulnerabilities?” he said. “We’re thinking about how do we work with industry.”

‘Adversarial AI’ a threat to military systems, Shift5’s Lospinoso says
https://www.c4isrnet.com/artificial-intelligence/2023/05/29/adversarial-ai-a-threat-to-military-systems-shift5s-lospinoso-says/
Mon, 29 May 2023 19:43:08 +0000

Josh Lospinoso’s first cybersecurity startup was acquired in 2017 by Raytheon/Forcepoint. His second, Shift5, works with the U.S. military, rail operators and airlines including JetBlue. A 2009 West Point grad and Rhodes Scholar, the 36-year-old former Army captain spent more than a decade authoring hacking tools for the National Security Agency and U.S. Cyber Command.

Lospinoso recently told a Senate Armed Services subcommittee how artificial intelligence can help protect military operations. The CEO/programmer discussed the subject with The Associated Press, as well as how software vulnerabilities in weapons systems are a major threat to the U.S. military. The interview has been edited for clarity and length.

Q: In your testimony, you described two principal threats to AI-enabled technologies: One is theft. That’s self-explanatory. The other is data poisoning. Can you explain that?

A: One way to think about data poisoning is as digital disinformation. If adversaries are able to craft the data that AI-enabled technologies see, they can profoundly impact how that technology operates.
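
To make the mechanics concrete, here is a minimal, self-contained sketch (not drawn from the interview, and using synthetic data) of one simple poisoning technique, label flipping, in which an attacker with write access to the training pipeline corrupts the labels of one class:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Clean synthetic data standing in for whatever an AI system ingests.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker relabels 40% of class-1 training samples as class 0,
# biasing the model against flagging that class.
rng = np.random.default_rng(0)
ones = np.where(y_train == 1)[0]
flip = rng.choice(ones, size=int(0.4 * len(ones)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 0

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# The poisoned model now misses far more class-1 items on clean test data.
print("clean model recall:   ", recall_score(y_test, clean_model.predict(X_test)))
print("poisoned model recall:", recall_score(y_test, poisoned_model.predict(X_test)))

In a real system the corruption would be subtler, with targeted flips or embedded triggers rather than broad relabeling, which is part of what makes poisoning hard to spot.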

Q: Is data poisoning happening?

A: We are not seeing it broadly. But it has occurred. One of the best-known cases happened in 2016. Microsoft released a Twitter chatbot it named Tay that learned from conversations it had online. Malicious users conspired to tweet abusive, offensive language at it. Tay began to generate inflammatory content. Microsoft took it offline.

Q: AI isn’t just chatbots. It has long been integral to cybersecurity, right?

A: AI is used in email filters to try to flag and segregate junk mail and phishing lures. Another example is endpoints, like the antivirus program on your laptop – or malware detection software that runs on networks. Of course, offensive hackers also use AI to try to defeat those classification systems. That’s called adversarial AI.
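
A similarly stripped-down sketch of an evasion-style adversarial attack, again purely illustrative rather than a depiction of any fielded system: because a linear classifier exposes its weights, an input can be nudged just far enough against them to flip the model’s decision.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy stand-in for a spam or malware classifier.
X, y = make_classification(n_samples=500, n_features=10, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[clf.predict(X) == 1][0]         # a sample the model currently flags
w = clf.coef_[0]                      # linear models expose their weights
margin = clf.decision_function([x])[0]

# Step just far enough against the weight vector to cross the boundary.
x_adv = x - (margin + 0.1) * w / np.dot(w, w)

print("original prediction: ", clf.predict([x])[0])      # flagged
print("perturbed prediction:", clf.predict([x_adv])[0])  # slips through

Attacks on nonlinear models work the same way in spirit, estimating this direction with gradients or repeated queries rather than reading the weights directly.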

Q: Let’s talk about military software systems. An alarming 2018 Government Accountability Office report said nearly all newly developed weapons systems had mission-critical vulnerabilities. And the Pentagon is thinking about putting AI into such systems?

A: There are two issues here. First, we need to adequately secure existing weapons systems. This is a technical debt we have that is going to take a very long time to pay. Then there is a new frontier of securing AI algorithms – novel things that we would install. The GAO report didn’t really talk about AI. So forget AI for a second. If these systems just stayed the way that they are, they’re still profoundly vulnerable.

We are discussing pushing the envelope and adding AI-enabled capabilities for things like improved maintenance and operational intelligence. All great. But we’re building on top of a house of cards. Many systems are decades old, retrofitted with digital technologies. Aircraft, ground vehicles, space assets, submarines. They’re now interconnected. We’re swapping data in and out. The systems are porous, hard to upgrade, and could be attacked. Once an attacker gains access, it’s game over.

Sometimes it’s easier to build a new platform than to redesign existing systems’ digital components. But there is a role for AI in securing these systems. AI can be used to defend if someone tries to compromise them.

Q: You testified that pausing AI research, as some have urged, would be a bad idea because it would favor China and other competitors. But you also have concerns about the headlong rush to AI products. Why?

A: I hate to sound fatalistic, but the so-called “burning use case” seems to apply. A product rushed to market often catches fire (gets hacked, fails, does unintended damage). And we say, ‘Boy, we should have built in security.’ I expect the pace of AI development to accelerate, and we might not pause enough to do this in a secure and responsible way. At least the White House and Congress are discussing these issues.

Q: It seems like a bunch of companies – including in the defense sector — are rushing to announce half-baked AI products.

A: Every tech company and many non-tech companies have made almost a jarring pivot toward AI. Economic dislocations are coming. Business models are fundamentally going to change. Dislocations are already happening or are on the horizon — and business leaders are trying to not get caught flat-footed.

Q: What about the use of AI in military decision-making such as targeting?

A: I do not, categorically do not, think that artificial intelligence algorithms — the data that we’re collecting — are ready for prime time for a lethal weapon system to be making decisions. We are just so far from that.

Swarms of AI-fueled drones, vehicles track targets in AUKUS tests
https://www.c4isrnet.com/unmanned/2023/05/26/swarms-of-ai-fueled-drones-vehicles-track-targets-in-aukus-tests/
Fri, 26 May 2023 15:02:16 +0000

WASHINGTON — A swarm of Australian, U.K. and U.S. artificial intelligence-enabled air and ground vehicles collaboratively detected and tracked targets during testing overseas.

The trials conducted by the AUKUS partners delivered several “world firsts,” including the live re-training and international exchange of AI models, according to the U.K. Ministry of Defence, which disclosed the news May 26, a month after testing.

More than 70 military and civilian defense personnel and industry players participated in the experiment, part of the AUKUS Advanced Capabilities Pillar, or Pillar 2, established to expedite the trilateral development of critical technologies, such as AI, quantum, cyber and hypersonics. Pillar 1 — more discussed — aims to help Australia acquire nuclear-powered submarines.

Abe Denmark, the U.S. senior adviser to the secretary of defense for AUKUS, in a statement said the April demonstration was “truly a shared effort.”

Together, teams developed models, directed different nations’ uncrewed aerial vehicles and evaluated performance. The joint deployments in the field featured Blue Bear Ghost and Insitu CT220 drones; Challenger 2 main battle tanks and Warrior armored vehicles; Viking uncrewed ground vehicles; a commercial FV433 Abbot self-propelled artillery gun; and a former Eastern Bloc BMP OT-90, an infantry fighting vehicle.

“By pooling our expertise and resources through our AUKUS partnerships,” Denmark said, “we can ensure that our militaries are equipped with the latest and most effective tools to defend our nations and uphold the principles of freedom and democracy around the world.”

Australian, U.K. and U.S. leaders have described AI as critical to international competitiveness in many sectors, finance, health and defense among them. By sharing AI and its underpinnings, the U.K. Ministry of Defence said in its announcement, the friendly militaries can figure out interoperability now, and not later, as well as save time and money.

The U.S. Department of Defense’s fiscal 2024 budget blueprint featured a $1.8 billion allocation for AI, with Deputy Defense Secretary Kathleen Hicks describing it as a “key technology” area. The department catalogued at least 685 ongoing AI projects as of early 2021, including several tied to major weapons systems.

Pentagon won’t pause pursuit of AI, CIO Sherman says
https://www.c4isrnet.com/artificial-intelligence/2023/05/25/pentagon-wont-pause-pursuit-of-ai-cio-sherman-says/
Thu, 25 May 2023 14:42:52 +0000

ST. LOUIS — The U.S. public and private sectors cannot afford to pause their pursuits of artificial intelligence, as some have called for, amid an international race for technological supremacy, according to Pentagon Chief Information Officer John Sherman.

Digital luminaries including Apple co-founder Steve Wozniak, Getty Images CEO Craig Peters and Twitter’s Elon Musk, alongside academics and others, signed an open letter in March advocating for powerful AI development to proceed “only once we are confident their effects” will be manageable and net positive.

But doing so, Sherman said May 24 at the GEOINT Symposium in St. Louis, risks ceding AI hegemony to China or Russia, powers the U.S. considers premier national security threats.

“I’ve said this in other venues, and I’m going to say it here today, that I know some have advocated for taking a knee for six months,” he said. “No. Not at the Department of Defense, not the intelligence community.”

Defense leaders see AI, autonomy and related technologies as critical to long-term competitiveness on the world stage. A Pentagon strategy for AI implementation describes breakthroughs as shaking up “the national security landscape,” with foreign governments “investing heavily” in ways “that threaten global security, peace and stability.”

At least 685 AI-related projects were underway at the Pentagon as of early 2021, according to the Government Accountability Office, a federal watchdog. They include several tied to major weapons systems. The tech will play a critical role in navigation and targeting aboard the Army’s future Optionally Manned Fighting Vehicle as well as the streamlining of logistics and maintenance needs across the broader military.

“One thing we take pride in — the United States, working with our allies — is to be responsible in how we apply AI and develop it. Not in ways that you see in China and Russia and elsewhere,” Sherman said. “We can do this, and create decision advantage for our warfighters, correctly with our democratic values.”

The Biden administration this week rolled out guidance for federally backed AI research. Months prior, the White House published a blueprint for an AI bill of rights, which laid out a road map for responsible AI application.

Why the military moves faster than government on AI
https://www.c4isrnet.com/it-networks/2023/05/24/why-the-military-moves-faster-than-government-on-ai/
Wed, 24 May 2023 20:49:27 +0000

The White House announced federal efforts to better understand and harness artificial intelligence for government, though full embrace of the technology is still likely a ways away.

Agencies are often slow to adopt technology, even with frameworks in place, because it is costly, labor-intensive and, in some ways, intimidating. To find out why, Federal Times spoke with José-Marie Griffiths, a former member of the National Security Commission on Artificial Intelligence and chair of its Workforce Subcommittee.

Griffiths, now the president of Dakota State University, where she works to educate the future workforce on AI, helped shape tech reforms and recommendations to leverage AI for the U.S. national security and defense workforces.

This interview has been edited for clarity and length.

Federal Times: The White House has said it intends to capitalize on AI to contend against national security threats and improve the functioning and efficiency of government. Where are we in this?

José-Marie Griffiths: “The federal government, and probably at state levels to some extent, too, have been slower to move than other institutions. They’ve become large institutions, and it’s hard to move them. And the mechanisms for moving quickly and updating, it’s just very hard.

The military can move a bit faster because they truly have this sort of top-down command structure. They can insist on it, to some extent.

So how do you get pockets of innovators within government? You need to find them. You need to empower them to innovate as they can and to gradually diffuse their innovations out to others.”

FT: What kind of recommendations were you making with your peers on the National Security Commission on Artificial Intelligence and how could those be taken up quickly?

JMG: “We were not just developing recommendations, but embodying those recommendations in language that could be absorbed directly into pieces of legislation, into existing legislation or even into executive orders. We tried to make it easy for everyone to adopt the recommendations.

Our recommendation at that time was we aren’t moving fast enough. And we perceived [that] into the geopolitical structure, [with] things going on around the world that we can no longer ignore — particularly, let’s face it, China’s ambitions, which China has been very open about and very clear about. In a way, it’s as if we let that creep up on us.

Ultimately, education and communication are going to be key.”

FT: As agencies look to cyber strategies as their North star, what steps might be easier to tackle first? How can agencies move forward?

JMG: “I think ultimately, the real dilemma is we’ve got whole layers of technology that really haven’t been changed. I mean, they’re still dealing with a fair amount of obsolete technology.

So when you want to do something new, you’ve got this really vulnerable environment. So how much effort are you going to put into protecting the vulnerable, rather than move forward and just replace what you have?

Even when there’ve been recommendations to move forward, they haven’t always been implemented, and it gets bottled down with ‘who’s going to pay for it?’ Agencies work one year at a time, so there’s no continuity in that sense.

If there were a way of saying ‘we’re going to approve a five year plan,’ now we know you’re gonna get funding every year for the next five years, unless something drastically goes wrong or we need the money because of an emergency. We don’t have that long-term view in government.”

FT: And what about workforce implications? Will humans remain a part of AI? And if they are, how can federal agencies ensure they have the staff support to shepherd this technology in light of hiring challenges?

JMG: “In most applications, we haven’t truly taken the human out of the loop. So that’s a question as we go forward: what’s the role of the human?

We don’t have a large enough workforce in the computing and information sciences generally. We have 750,000 vacancies in the United States alone in cybersecurity positions. And our prediction was there would likely be eventually even more openings for people with artificial intelligence expertise than with cybersecurity expertise. And so we face this huge shortage of people.

The number of people going into the computing related sciences, it’s going up, but the increases are our international students. And if we want to talk about the diversity of that workforce, we are way back to [late 1990s] levels of women going into those fields. We’ve made a lot of progress over a period of years, and it looks as though we’ve gone all the way back. So we do face some critical issues.”

FT: So, we’re in a global competition for talent as well as a domestic one.

JMG: “We have to say homegrown talent isn’t going to be enough. And [we] have to have an immigration system that recognizes [international talent]. We can start off with the increase in foreign students who are getting degrees and perhaps give them longer work permits, get them green cards a little bit sooner. There are ways to import technologists, share capabilities across countries, et cetera, especially the Five Eyes countries where we already have intelligence sharing.”

FT: What will it take for agencies to make up workforce gaps?

JMG: “It has to be a multi-dimensional approach.

We probably need to connect people to government missions.

Internships are great, both for the agencies to learn about people and for people to learn about agencies. The federal government would have to connect, I think, to professors and instructors so that they know more.

Google and Microsoft aren’t going to pick up everyone and we know they let people go, too.

The second area [of recruiting] is people already in the workforce. There are a lot of people in the workforce who’ve got technology-related degrees, so I think up-skilling [and] re-skilling of people already in the workforce is a way to go. It’s not for everyone. But for some people, if they have the aptitude and interest, that could be a way to jumpstart. Take a group of people, get them up to speed, and then have them help you get the next level of people up to speed.

And then the third area is in our K-12 systems. We have a Cyber Academy, now that’s going to high schools in our state. Students will be able to take courses in computer science, cybersecurity, artificial intelligence and come into college with a full year of college, as well as their high school diploma.

We’ve got to get out into the K-12 environment and not scare them off.

Industry is also willing to help government if we can move the technology implementation forward. We talked about a sort of national guard in IT, and reserves in cyber, where people could go and do their duty and help out as needed.”

FT: How do agencies sell the mission or tell their story to draw talent in?

JMG: “We talk a lot about the skills we need, but I don’t know that government says too much about why they need them. What would it do for the government, and what it would do for the people that the government serves?

And I don’t think young people have a real idea of what government does. Their interaction with government is pretty limited. And their parents’ interaction is, you know, IRS. We hear about what’s going on in Washington, and it’s not always appealing.”

FT: How will the federal government struggle to determine the regulatory environment? I mean, can you even regulate something that evolves so fast and can be developed by everyday people in their own homes?

JMG: “I thought it was a great move when open AI evolved. Everybody can do this. It’s just, now, there are so many people doing it. And we have to understand who’s working on AI that doesn’t necessarily have the best interests. You can’t protect against everything, but I think we’re in for a long, long battle.

There are all these sorts of lone-wolf players, gangs of people and people who just like to disrupt. And they’re very tech savvy, and they’ve largely taught themselves.

I don’t think we should try to control it totally. But we should say ‘where are the real risks?’”

]]>
imaginima
<![CDATA[White House unveils efforts to guide federal research of AI]]>https://www.c4isrnet.com/federal-oversight/2023/05/24/white-house-unveils-efforts-to-guide-federal-research-of-ai/https://www.c4isrnet.com/federal-oversight/2023/05/24/white-house-unveils-efforts-to-guide-federal-research-of-ai/Wed, 24 May 2023 14:16:24 +0000The White House on Tuesday announced new efforts to guide federally backed research on artificial intelligence as the Biden administration looks to get a firmer grip on understanding the risks and opportunities of the rapidly evolving technology.

Among the moves unveiled by the administration was a tweak to the United States’ strategic plan on artificial intelligence research, which was last updated in 2019, to add greater emphasis on international collaboration with allies.

White House officials on Tuesday were also hosting a listening session with workers on their firsthand experiences with employers’ use of automated technologies for surveillance, monitoring, evaluation, and management. And the U.S. Department of Education’s Office of Educational Technology issued a report focused on the risks and opportunities related to AI in education.

“The report recognizes that AI can enable new forms of interaction between educators and students, help educators address variability in learning, increase feedback loops, and support educators,” the White House said in a statement. “It also underscores the risks associated with AI — including algorithmic bias — and the importance of trust, safety, and appropriate guardrails.”

The U.S. government and private sector in recent months have begun more publicly weighing the possibilities and perils of artificial intelligence.

Tools like the popular AI chatbot ChatGPT have sparked a surge of commercial investment in other AI tools that can write convincingly human-like text and churn out new images, music and computer code. The ease with which AI technology can be used to mimic humans has also propelled governments around the world to consider how it could take away jobs, trick people and spread disinformation.

Last week, Senate Majority Leader Chuck Schumer said Congress “must move quickly” to regulate artificial intelligence. He has also convened a bipartisan group of senators to work on legislation.

The latest efforts by the administration come after Vice President Kamala Harris met earlier this month with the heads of Google, Microsoft, ChatGPT-creator OpenAI and Anthropic. The administration also previously announced an investment of $140 million to establish seven new AI research institutes.

The White House Office of Science and Technology Policy on Tuesday also issued a new request for public input on national priorities “for mitigating AI risks, protecting individuals’ rights and safety, and harnessing AI to improve lives.”

]]>
Evan Vucci
<![CDATA[South Korea company fuses AI with imagery to detect ballistic missiles]]>https://www.c4isrnet.com/industry/2023/05/23/s-korea-company-fuses-ai-with-imagery-to-detect-ballistic-missiles/https://www.c4isrnet.com/industry/2023/05/23/s-korea-company-fuses-ai-with-imagery-to-detect-ballistic-missiles/Tue, 23 May 2023 19:37:08 +0000ST. LOUIS — A South Korean company specializing in satellite imagery analysis is developing new techniques to identify missiles, launchers and supporting infrastructure in North Korea, with potential applications far beyond the shared peninsula.

SI Analytics CEO Taegyun Jeon on May 22 briefed reporters on the North Korea Dynamic Ballistic Missile Operation Area Search Project at the GEOINT Symposium in St. Louis, Missouri. The company previously competed in U.S. Defense Innovation Unit challenges, including building-damage assessments and the detection of so-called dark vessels that don’t broadcast their locations or appear in public monitoring systems.

The latest project fuses Earth-observation data from multiple commercial satellite operators with in-house artificial intelligence-augmented image analysis to detect and classify anomalies — North Korean ballistic missile operations, for example. The findings, once verified by experts, can then be shared, facilitating a government response.

“We will contribute our private sector capability and effort for a safer world,” Jeon said. “As can be seen in the media, the news, there is increasing global stress from North Korea.”

This image released and notated by Airbus Defense and Space as well as 38 North shows the Punggye-ri nuclear test site in North Korea. (Airbus Defense and Space/38 North via AP)

North Korean missile tests rattle neighbors and far-flung nations alike. They also draw widespread condemnation. A joint statement issued this week by South Korea and the European Union described North Korean developments as “reckless” and as a “serious threat” to “international and regional peace and security.”

A meaningful dialogue is needed, it continued, as is a suspension of “all actions that raise military tensions.”

SI Analytics was established in 2018. It is based in Daejeon, with offices in Seoul and Gwangju.

]]>
ANTHONY WALLACE
<![CDATA[Geospatial-intelligence agency making strides on Project Maven AI]]>https://www.c4isrnet.com/artificial-intelligence/2023/05/22/geospatial-intelligence-agency-making-strides-on-project-maven-ai/https://www.c4isrnet.com/artificial-intelligence/2023/05/22/geospatial-intelligence-agency-making-strides-on-project-maven-ai/Mon, 22 May 2023 21:29:54 +0000ST. LOUIS — Since taking over operational control of the Defense Department’s most prominent artificial intelligence tool in January, the National Geospatial-Intelligence Agency has made “important strides” toward improving geolocation accuracy, detecting targets and automating work processes, according to its director.

Project Maven was created in 2017 to take data, imagery and full-motion video from uncrewed systems, process it and use it to detect targets of interest. The agency announced last year it would oversee operations of the program — which had been managed by the Under Secretary of Defense for Intelligence and Security — but a protracted fiscal 2023 budget cycle pushed that official transition to the beginning of this year.

“The bottom line here is that under NGA’s watch, Maven . . . has made some significant technological strides and has already contributed to some of our nation’s most important operations,” Vice Adm. Frank Whitworth said May 22 at the GEOINT Symposium in St. Louis.

Within the intelligence community, NGA is the lead for processing and analyzing satellite and other overhead imagery as well as mapping the Earth. Some portions of Project Maven that do not pertain to GEOINT have shifted to the Pentagon’s Chief Digital and Artificial Intelligence Office. The effort is not yet a formal program, though the agency expects to achieve that milestone this fall.

In a briefing with reporters following his speech, Whitworth declined to offer details on how Project Maven is being used due to security concerns. He did say that military commanders are “really excited” about the tool’s growth and the agency is expanding its collaboration with academia and industry as they continue to develop the system.

Mark Munsell, NGA’s director of data and innovation, said the agency’s primary charge within Project Maven is to increase the quality of AI and machine learning algorithms and, as a result, improve their ability to detect targets within imagery.

NGA has used scenarios in the ongoing war in Ukraine to improve the AI algorithms used by Maven and other programs, he said. For example, the agency hasn’t typically trained its AI models to recognize destroyed equipment. But that information has proved relevant in Ukraine, Munsell said, and NGA is now training its models for those scenarios.

]]>
Patrick Enright
<![CDATA[Lawmakers seek special focus on autonomy within Pentagon’s AI office]]>https://www.c4isrnet.com/artificial-intelligence/2023/05/17/lawmakers-seek-special-focus-on-autonomy-within-pentagons-ai-office/https://www.c4isrnet.com/artificial-intelligence/2023/05/17/lawmakers-seek-special-focus-on-autonomy-within-pentagons-ai-office/Wed, 17 May 2023 18:29:00 +0000WASHINGTON — A bill introduced this month by a pair of congressmen would create an office inside the U.S. Department of Defense to align and accelerate the delivery of autonomous technologies for military use.

The Autonomous Systems Adoption & Policy Act would nest a so-called Joint Autonomy Office within the relatively new Chief Digital and Artificial Intelligence Office, or CDAO, according to Rep. Rob Wittman, R-Va., one of the lawmakers involved, and would shake up a status quo that is “not going to do it.”

The Defense Department is leaning into AI as a means to buttress abilities on the battlefield and in the back office. Army, Air Force and Navy leaders have all produced plans banking on the technology to augment human decision-making and firepower. Wittman and co-sponsor Rep. Dutch Ruppersberger, D-Md., say particular attention is needed on autonomy.

“We want to make sure that the technology that is out there is not just being applied in a spot, quick way,” Wittman said May 17 at the Nexus 23 defense conference, hosted by Applied Intuition and the Atlantic Council at the National Press Club in Washington. “Autonomy holds great promise in many different systems, and we don’t want it limited by how one service branch sees autonomy. We want to make sure that it’s looked at from a broad perspective.”

U.S. Rep. Rob Wittman, a Virginia Republican, speaks May 17, 2023, at the Nexus 23 defense conference at the National Press Club in Washington, D.C. (Colin Demarest/C4ISRNET)

More than 600 AI projects were underway at the Pentagon as of early 2021, according to a public tally. The joint office envisioned by Wittman and Ruppersberger would provide a single point of accountability for the incorporation of autonomy across the military, among other responsibilities.

“We’ve seen in the Pentagon where there has been movement towards autonomy at a faster pace than has happened in the past,” Wittman said. “If we’re going to have unity of purpose, it needs to be in a single place. It needs to be resourced from a single perspective.”

Wittman serves on the House Armed Services Committee and leads its tactical air and land forces panel. Ruppersberger co-chairs the House Army Caucus and is a member of a defense appropriations subcommittee.

The CDAO was established in December 2021 and hit its first full strides months later. Billed as an overseer and expeditor of all things AI and analytics, it subsumed what were the Joint Artificial Intelligence Center, the Defense Digital Service, the Advana data platform and the chief data officer’s role.

Asked if he or Ruppersberger have spoken with CDAO boss Craig Martell about the legislation, Wittman said there have been “some preliminary conversations with folks” to shape it. Defense Department officials, he said, are “situationally aware of the things we’re trying to do.”

]]>
Lance Cpl. Nathaniel Hamilton
<![CDATA[ChatGPT’s chief calls for new federal agency to regulate AI]]>https://www.c4isrnet.com/federal-oversight/congress/2023/05/16/chatgpts-chief-calls-for-new-federal-agency-to-regulate-ai/https://www.c4isrnet.com/federal-oversight/congress/2023/05/16/chatgpts-chief-calls-for-new-federal-agency-to-regulate-ai/Tue, 16 May 2023 16:42:21 +0000The head of the artificial intelligence company that makes ChatGPT told Congress on Tuesday that government intervention “will be critical to mitigate the risks of increasingly powerful” AI systems.

“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” OpenAI CEO Sam Altman testified at a Senate hearing Tuesday.

Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”

His San Francisco-based startup rocketed to public attention after it released ChatGPT late last year. ChatGPT is a free chatbot tool that answers questions with convincingly human-like responses.

What started out as a panic among educators about ChatGPT’s use to cheat on homework assignments has expanded to broader concerns about the ability of the latest crop of “generative AI” tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs.

And while there’s no immediate sign that Congress will craft sweeping new AI rules, as European lawmakers are doing, the societal concerns brought Altman and other tech CEOs to the White House earlier this month and have led U.S. agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.

Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology and the law, opened the hearing with a recorded speech that sounded like the senator, but was actually a voice clone trained on Blumenthal’s floor speeches and reciting a speech written by ChatGPT after he asked the chatbot to compose his opening remarks.

The result was impressive, said Blumenthal, but he added, “What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin’s leadership?”

Blumenthal said AI companies ought to be required to test their systems and disclose known risks before releasing them, and expressed particular concern about how future AI systems could destabilize the job market.

Pressed on his own worst fear about AI, Altman mostly avoided specifics. But he later proposed that a new regulatory agency should impose safeguards that would block AI models that could “self-replicate and self-exfiltrate into the wild” — hinting at futuristic concerns about advanced AI systems that could manipulate humans into ceding control.

Co-founded by Altman in 2015 with backing from tech billionaire Elon Musk and a mission focused on safety, OpenAI has evolved from a nonprofit research lab into a business. Its other popular AI products include the image-maker DALL-E. Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.

Altman is also planning to embark on a worldwide tour this month to national capitals and major cities across six continents to talk about the technology with policymakers and the public. On the eve of his Senate testimony, he dined with dozens of U.S. lawmakers, several of whom told CNBC they were impressed by his comments.

Also testifying were IBM’s chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University who was among a group of AI experts who called on OpenAI and other tech firms to pause their development of more powerful AI models for six months to give society more time to consider the risks. The letter was a response to the March release of OpenAI’s latest model, GPT-4, described as more powerful than ChatGPT.

“Artificial intelligence will be transformative in ways we can’t even imagine, with implications for Americans’ elections, jobs, and security,” said the panel’s ranking Republican, Sen. Josh Hawley of Missouri. “This hearing marks a critical first step towards understanding what Congress should do.”

A number of tech industry leaders have said they welcome some form of AI oversight but have cautioned against what they see as overly heavy-handed rules. In a copy of her prepared remarks, IBM’s Montgomery asked Congress to take a “precision regulation” approach and disagreed with proposals by Altman and Marcus for an AI-focused regulator.

“This means establishing rules to govern the deployment of AI in specific use-cases, not regulating the technology itself,” Montgomery said.

]]>
LIONEL BONAVENTURE
<![CDATA[Pentagon’s AI office rebooting global experiments for JADC2]]>https://www.c4isrnet.com/battlefield-tech/c2-comms/2023/05/08/pentagons-ai-office-rebooting-global-experiments-for-jadc2/https://www.c4isrnet.com/battlefield-tech/c2-comms/2023/05/08/pentagons-ai-office-rebooting-global-experiments-for-jadc2/Mon, 08 May 2023 17:23:11 +0000BALTIMORE — The Pentagon’s artificial intelligence office is reviving a series of worldwide trials meant to advance its vision of seamless connectivity and coordination, known as Joint All-Domain Command and Control.

The return of the Global Information Dominance Experiments, or GIDE, under the direction of the Chief Digital and Artificial Intelligence Office, or CDAO, comes after a months-long hiatus and amid an explosion of public interest in AI and its potential to augment humans, military or otherwise.

CDAO boss Craig Martell on May 3 said his team took the reins of the experiments to “understand what’s the right way to get after JADC2.” The office was previously charged with crafting a so-called data integration layer, which would help collect information from disparate sources and present it in a unified manner.

“We are not sitting down and writing up requirements that will get built five years from now, and then nobody will want to use it,” Martell said at the AFCEA TechNet Cyber conference in Baltimore. “Every experiment we do, it’s not just: ‘Hey, folks, did it work? Thumbs up, thumbs down.’ We’re actually building out metrics to ask, ‘Is this faster? Did we do that right? Has this number increased?’”

The multibillion-dollar JADC2 endeavor aims to link forces far afield — across land, air, sea, space and cyber — and support speedy battlefield decision-making. Such an approach is necessary, defense officials say, to deal with the military advancements of China and Russia, premier national security threats.

The first of the relaunched tests, the fifth overall, known as GIDE V, was held at the beginning of the year and featured Pentagon officials and multiple combatant commands and installations around the world.

Craig Martell, the Pentagon's chief digital and artificial intelligence officer, or CDAO, introduces himself at the AFCEA TechNet Cyber conference in Baltimore, Maryland, on May 3, 2023. (Colin Demarest/C4ISRNET)

Three more iterations, GIDE VI through GIDE VIII, are expected in 2023. Martell said he and his colleagues work “with the combatant commands on a pretty regular cadence,” aiming to dissolve barriers between regions.

“If you think about the Indo-Pacific fight, Africa Command might have some information that’s necessary. Central Command might know something about what’s going on in their domain that could be necessary,” Martell said. “So we think about this as global … and we’re asking these dataflow questions, these workflow questions.”

The initial GIDEs were spearheaded by Northern Command and North American Aerospace Defense Command. The two entities are led by Air Force Gen. Glen VanHerck.

GIDE I in December 2020 tied together Strategic Command, Transportation Command, Southern Command and Indo-Pacific Command. It also involved the undersecretary of defense for intelligence and security. GIDE II in March 2021 expanded participation and welcomed the Joint Artificial Intelligence Center, one of four entities the CDAO subsumed.

Later versions pulled in Project Maven, designed to process imagery and full-motion video from drones and other surveillance assets and to detect potential threats, and manpower from the Department of the Air Force’s Chief Architect Office.

VanHerck in 2021 said GIDE embodies a “fundamental change in how we use information and data” to maintain an upper hand.

“Right now, the threats we face and the pace of change in the geostrategic environment continues to advance at really alarming rates,” he said at the time. “We’ve entered an era of new and renewed strategic competition, and this time, we’re facing two peer competitors, both nuclear-armed, that are competing against us on a daily basis.”

]]>
Tech. Sgt. Peter Thompson
<![CDATA[Senate’s China bill to restrict advanced tech exports, bolster allies]]>https://www.c4isrnet.com/congress/2023/05/03/senates-china-bill-to-restrict-advanced-tech-exports-bolster-allies/https://www.c4isrnet.com/congress/2023/05/03/senates-china-bill-to-restrict-advanced-tech-exports-bolster-allies/Wed, 03 May 2023 19:37:16 +0000WASHINGTON — The Senate is starting work on a new bipartisan bill that will both further block China from using American capital to develop its most cutting-edge technology, and shore up Washington’s support for Pacific security partners.

Senate Majority Leader Chuck Schumer, D-N.Y., announced the plans Wednesday after directing his committee chairs to work with their Republican counterparts to begin crafting the legislation, which he hopes to move within the next several months.

“We will work to halt the Chinese government’s development of advanced technologies that we know will shape the course of this century,” Schumer told reporters at a news conference flanked by about a dozen Senate Democrats. “We need to build on the kinds of actions like the Biden administration’s export control rule to block the flow of chip-tooling technology to China.”

The Commerce Department’s Bureau of Industry and Security announced a sweeping range of export controls last year that severely curtailed China’s ability to obtain some of the world’s most cutting-edge microchips, arguing that Beijing could use them to “produce advanced military systems” through artificial intelligence algorithms. Since then, the Biden administration has persuaded the Netherlands and Japan — also leaders in advanced semiconductor technology — to institute their own export controls on microelectronics exports to China.

These semiconductor export controls hamper China’s ability to build sophisticated supercomputers needed to produce advanced AI algorithms, which could rapidly sift through data to inform battlefield decisions under compressed time frames. These advanced chips and the technology they enable are also needed for advanced civilian technology like weather forecasting and vaccine development.

Schumer noted that he’s tapped Senate Foreign Relations Committee Chairman Bob Menendez, D-N.J., and Banking Committee Chairman Sherrod Brown, D-Ohio, to begin looking at additional authorities the Biden administration could use to expand its export control regime against China.

China has set 2030 as its target date to become a global leader in artificial intelligence, with the subsequent goal of putting the People’s Liberation Army on par with the U.S. military by 2035 — a goal that U.S. export controls and the Senate’s upcoming legislation intend to complicate.

“We want to limit the flow of investment to the Chinese government,” Schumer said. “It’s incumbent on us to ensure that the U.S. is not the financial lifeblood supporting the Chinese government and its military technological advancement.”

The bill will also build upon legislation Congress passed last year to lessen U.S. dependence on China for supply chains crucial to the defense-industrial base, including semiconductors and critical minerals. That included $52 billion in annual CHIPS Act subsidies and tax incentives through 2026 meant to sway American and Asian semiconductor manufacturers to produce microelectronics within the United States.

“Now we have to build on those investments so that we are positioned to compete against the [People’s Republic of China] in the way that we need to,” Sen. Jeanne Shaheen, D-N.H., said at the news conference.

Pacific partners

Most advanced microchip production is concentrated in Asia, and the U.S. does not produce any of the world’s leading semiconductors needed for materiel such as precision-guided munitions and the F-35 fighter jet.

The Taiwan Semiconductor Manufacturing Co. is slated to begin producing the world’s most advanced semiconductors in 2026 after it opens a factory in Arizona using subsidies from the CHIPS Act. Taiwan produces about 92% of the world’s most advanced semiconductors.

“We must continue to deter the Chinese government from any conflict with Taiwan,” said Schumer, noting that the new bill will also focus on closer alignment with allies and security partners in the region.

Schumer said he intends to revisit a Taiwan bill that the Senate Foreign Relations Committee advanced last year. A $10 billion Taiwan military aid authorization from that bill made its way into the fiscal 2023 National Defense Authorization Act, though congressional appropriators have yet to fund much of that security assistance.

But concerns from the White House kept other components of the bill from becoming law, including sanctions on China for acts of aggression against Taiwan and upgrades to the island’s diplomatic status.

Senate Armed Services Committee Chairman Jack Reed, D-R.I., noted that the CHIPS Act “has provided more security to our military-industrial base.” He added that the new China bill would seek to build upon security cooperation in the Asia-Pacific region while enhancing mechanisms such as the trilateral AUKUS agreement with the United Kingdom and Australia, as well as the security group known as the Quad, which also includes Australia, Japan and India.

“We want to create a force structure, which is combining all of our allies, being able to conduct operations and communication on an uninterrupted basis, in fact telling the Chinese and showing the Chinese that they would be up against the world if they tried anything,” Reed said.

The House Select Committee on the Chinese Communist Party is also preparing a series of bipartisan proposals to rapidly arm Taiwan and enhance cybersecurity with Taipei, Defense News reported last week.

The panel’s chairman, Rep. Mike Gallagher, R-Wis., told Defense News in an exclusive interview that he intends to include those proposals as amendments to the FY24 NDAA, which the House Armed Services Committee will mark up later this month.

Gallagher singled out bolstering U.S. munitions production through multiyear contracts and ameliorating the $19 billion arms sale backlog to Taiwan as some of his key defense priorities on the committee.

]]>
Chip Somodevilla
<![CDATA[US cyber leaders look to AI to augment network activities]]>https://www.c4isrnet.com/cyber/2023/05/03/us-cyber-leaders-look-to-ai-to-augment-network-activities/https://www.c4isrnet.com/cyber/2023/05/03/us-cyber-leaders-look-to-ai-to-augment-network-activities/Wed, 03 May 2023 19:10:08 +0000BALTIMORE — The increasing intricacy of military networks and the digital savvy of other world powers is making artificial intelligence and related programs more desirable for U.S. cyber leaders.

With an explosion of high-tech devices and vehicles and the vast amount of data they pass back and forth come additional security and responsiveness demands. And “anything we can do to buy down that complexity,” by employing AI and machine learning, “would be absolutely fantastic,” according to Lt. Gen. Maria Barrett, the leader of Army Cyber Command.

“We fly planes on autopilot, we land on autopilot,” she said May 2 at the AFCEA TechNet Cyber conference in Baltimore. “This is not scary to run a network in an automated way.”

Automation is a key piece of the Pentagon’s adoption of zero trust, a new cybersecurity paradigm. The approach assumes networks are jeopardized, requiring perpetual validation of users, devices and access. The practice is often likened to “never trust, always verify.”

Defense officials have imposed a fiscal 2027 deadline to implement a level of zero trust, which totals more than 100 activities, capabilities and so-called pillars.

Future Army recon helicopter will still need pilots, study finds

“Zero trust is all about looking at the data. It’s not just about the human being who logs in as an identity because, at the end of the day, that is a data element,” Barrett said. “We do, really, need to get to this place where we’re now starting to think about looking for the anomalous data that occurs in several different aspects of our network in order to identify where the adversary is much sooner, and, going back to my AI and ML piece, in an automated way.”

The U.S. considers China and Russia its most significant cyber threats. Iran and North Korea also make the list, to a lesser degree, as do other autocratic states.

Monitoring all the digital nooks, crannies and potential backdoors and loose ends is already a demanding task, made more so by “the plethora of devices out there that generate traffic,” according to Maj. Gen. Joseph Matos with Marine Corps Forces Cyberspace Command.

“You have planes, you have weapon systems, you have vehicles, they’re all generating data, they’re all generating information,” he said at the same event where Barrett spoke. “Now, how do you keep track of all that, and how do you manage all that data? Even with zero trust, that’s just an awful lot.”

More than 685 AI projects were underway at the Pentagon as of 2021, the most recent public tally. At least 232 efforts are being handled by the Army, according to the Government Accountability Office, a federal watchdog. The Marine Corps is dealing with at least 33.

]]>
Colin Demarest
<![CDATA[Generative AI providing fuel for hackers, DISA Director Skinner says]]>https://www.c4isrnet.com/artificial-intelligence/2023/05/02/generative-ai-providing-fuel-for-hackers-disa-director-skinner-says/https://www.c4isrnet.com/artificial-intelligence/2023/05/02/generative-ai-providing-fuel-for-hackers-disa-director-skinner-says/Tue, 02 May 2023 20:11:54 +0000BALTIMORE — Generative artificial intelligence, software capable of carrying a convincing, human-like conversation or crafting content like computer code with little prompting, will make hackers more sophisticated, ultimately raising the bar for U.S. safeguards, according to the leader of the Defense Information Systems Agency.

Director Lt. Gen. Robert Skinner said the technology is one of the most disruptive developments he’s seen in a long time, and has serious security implications. A similar warning was issued by the National Security Agency’s cybersecurity boss, Rob Joyce, earlier this year.

“Those who harness that, and can understand how to best leverage it, but also how to best protect against it, are going to be the ones who have the high ground,” Skinner said May 2 at the AFCEA TechNet Cyber conference in Baltimore. “We in this room are thinking about how this applies to cybersecurity. How does it apply to intelligence? How does it apply to our warfighting capabilities?”

Generative AI in recent months was popularized by OpenAI’s ChatGPT, which accrued more than 1 million users within a week of its launch. Sam Altman, OpenAI’s CEO, in March told ABC News he worries about how these models could be used for widespread disinformation and “could be used for offensive cyberattacks.”

US Air Force shifting hundreds of computer apps to the cloud

Skinner on Tuesday predicted generative AI would not be a significant tool for “high-end adversaries.” Rather, the tech “is going to help a whole bunch of other individuals get up to that level in a much faster manner.”

“So how do we have the protective systems, the security and the network capabilities to support protecting that data and support our folks?” he said.

The U.S. considers China and Russia top-tier threats in the virtual world. Other foes include Iran and North Korea, according to the Biden administration’s cybersecurity strategy, which promised the use of all instruments of national power to fend off cyber misbehavior.

Mastery of AI is thought key to enduring international competitiveness in defense, finance and other sectors. At least 685 AI projects, including several tied to major weapons systems, were underway at the Pentagon as of early 2021, the latest public tally.

DISA added generative AI to its tech watch list this fiscal year. The inventory of cutting-edge topics and gear, refreshed every six months or so, in the past featured 5G, edge computing and telepresence.

]]>
<![CDATA[Connectivity will ‘make or break’ US military use of AI, official says]]>https://www.c4isrnet.com/artificial-intelligence/2023/04/28/connectivity-will-make-or-break-us-military-use-of-ai-official-says/https://www.c4isrnet.com/artificial-intelligence/2023/04/28/connectivity-will-make-or-break-us-military-use-of-ai-official-says/Fri, 28 Apr 2023 16:00:08 +0000WASHINGTON — Leaps in military artificial intelligence and other advanced computing capabilities will be for naught if troops and battlefield systems can’t ultimately connect to one another, and do so securely, an official with U.S. Central Command said.

While the Department of Defense hails AI as a game-changer and the defense industry likewise invests and advertises its wares, it is network infrastructure, basic connectivity, that is “at the core of anything related to” the technology’s real-world adoption, according to Schuyler Moore, CENTCOM’s chief technology officer.

“Algorithms on their own are increasingly less interesting to us,” she said April 27 at a SparkCognition Government Systems event in Austin, Texas. “The question is, do they run on the network with the right classification of other data that we need? Do they run in a particular area, at a forward operating base or on a vessel where the bandwidth is, in technical terms, real bad?”

As the U.S. prepares for potential conflict with China in the Pacific or Russia in Europe, it confronts a conundrum: how to link forces far afield, operating covertly or under fire. Both China and Russia are thought capable of hampering U.S. military communications and resisting its targeting and attacks.

The Pentagon is pursuing seamless networking — across land, air, sea, space and cyber — through a multibillion-dollar endeavor known as Joint All-Domain Command and Control, or JADC2. The Army, more specifically, considers network modernization one of its top priorities, alongside an overhaul of its aviation fleet and improved air-and-missile defense.

Moore on Thursday said she and others “had some really interesting, sometimes depressing, occasionally uplifting, conversations with the services about the network infrastructure that we rely on,” both stateside and overseas.

Admiral Gilday sees uncrewed vessels as critical to US Navy’s future

Reliable connections are essential to shuttling data and acting on orders derived from them. A disconnect can mean there is little to be examined and little to be relayed, leaving troops stagnant or ill-informed.

“From start to finish, if I’ve collected data at a certain point, and then I need to push it back to a home base where you can run analytics, and that pipe is severed, suddenly everything downstream of that stops,” said Moore, who previously served as the chief strategy officer for Task Force 59, an outfit designed to quickly fold AI and uncrewed systems into Navy operations.

“If you think about data being the limiting factor for maturity and function of a model, we at the edge have found that network infrastructure and function is the limiting factor for adoption and use of anything,” she said. “I will hammer on this again and again. This is the make or break of whether or not models have any impact on our operations.”

The Air Force in January expressed interest in installing always-on surveillance systems fueled by AI at sites overseen by CENTCOM, including Al Udeid Air Base in Qatar.

Such a setup would slash manpower and man-hours needed to keep tabs on foreign workers, an around-the-clock assignment, the Air Force said in documents published at the time. Al Udeid is the largest U.S. military base in the Middle East. It served as a crucial evacuation hub amid the 2021 Afghanistan withdrawal.

]]>
<![CDATA[Future Army recon helicopter will still need pilots, study finds]]>https://www.c4isrnet.com/unmanned/2023/04/19/future-army-recon-helicopter-will-still-need-pilots-study-finds/https://www.c4isrnet.com/unmanned/2023/04/19/future-army-recon-helicopter-will-still-need-pilots-study-finds/Wed, 19 Apr 2023 09:24:00 +0000WASHINGTON — Future versions of U.S. Army reconnaissance helicopters will need trained aviators to operate them well into the next decade despite advances in artificial intelligence, according to a study conducted by Mitre Corp. for service leaders.

Full-fledged autonomy would fail to “faithfully” fulfill more than three-quarters of studied tasks associated with the Army’s in-development Future Attack Reconnaissance Aircraft, or FARA, by 2030, according to the technical analysis, details of which were recently shared with C4ISRNET.

The odds aren’t much better in 2040, either. At least 10 “high-risk” and 18 “medium-risk” challenges hampering no-pilot deployment were identified, suggesting human input — in the actual advanced rotorcraft, or beamed in from afar — will continue to be relied upon for complex, high-stakes military endeavors.

Maj. Gen. Walter Rugen, the director of the Army’s Future Vertical Lift Cross-Functional Team, said the findings will help determine how development money is spent.

“They’re very informative to a policy guy like me, that has to decide where our investments go,” Rugen said at a February event hosted by Mitre, which manages federally funded research and development centers.

Autonomous tech can help keep US homeland safe, NORAD’s VanHerck says

The team Rugen leads is tasked with helping overhaul the Army’s aging airborne fleet, among other heavy lifts. The portfolio includes FARA, the Future Long-Range Assault Aircraft, or FLRAA, and future tactical unmanned aircraft systems and air-launched effects.

The Army in December selected Textron’s Bell unit to build FLRAA, a $1.3 billion deal that marked the service’s largest helicopter procurement in 40 years. The choice has since been protested by Lockheed Martin’s Sikorsky. A ruling from the Government Accountability Office is expected no later than April 7.

A contractor has not yet been selected to formally build FARA, which has earned the “knife fighter” moniker and is planned to succeed the Kiowa scout helicopter, retired nearly a decade ago. AH-64 Apache attack helicopters paired with Shadow UAS are filling the gap now.

The deep-dive conducted by Mitre, which tapped into development documents, academic publications and Army metrics and relied on interviews with air cavalry, “really helps us define” what’s possible in the near- and mid-terms, Rugen said.

“What I’ve kind of seen is, in many respects, the soldier still is our best sensor,” he added. “The soldier at the tactical edge is going to be quicker through the mid-term, through that 2030 time, than the computer.”

While uncrewed drones are well-equipped for what Rugen called “dull, dirty or dangerous” work — circling and forever staring, or probing chemically contaminated spaces — something like FARA is meant for more sophisticated tasks, applications that demand finesse, expertise and in-the-moment judgement.

“As we look at our drones, we’re talking about an extension of our sensor. And we’re talking about, in this report, really the hardest thing we do on the battlefield, which is fight for information,” Rugen said. “Reconnaissance is our toughest thing that we’re doing. And it’s hard to outsource that, certainly in the mid-term, to some autonomous agent.”

Maj. Gen. Walter Rugen, the director of the Future Vertical Lift Cross-Functional Team, listens to a speech Oct. 12, 2022, at the Association of the U.S. Army annual conference in Washington, D.C. (Colin Demarest/C4ISRNET)

Among the factors that bar an empty FARA cockpit from reality are immature perception, decision-making and intent-determination capabilities, according to the assessment.

The trio are incredibly important to get right, and get right every time, according to John Wurts, a senior autonomous systems engineer at Mitre.

“The question becomes the importance of a reconnaissance mission,” he said at the same event where Rugen spoke. “We talked about information, and what’s most important to a reconnaissance mission is coming against the commander’s intent: understanding what are your reconnaissance objectives, what merits the threshold of reporting, what do you need to report about either allied troops, enemy troops, terrain information, and at what points in the mission?”

Training reliable AI requires massive amounts of time, data and exposure. It’s difficult enough on the civilian side: this is a stop sign, this is a bus, this is the quickest route home. Things only get more complex in a military setting, when bullets are whizzing by, people are dying and the choices presented are not binary.

“A human can express their own tactical curiosity, understand second- and third-order effects, understand how to adapt to an adversary action,” said Wurts, who previously worked in the auto industry. “When we ask for a no-pilot configuration to operate the same set, that needs to all reside in the autonomy.”

Striking a balance may be the key.

Autonomy in the cockpit

As the Air Force increasingly hypes manned-unmanned teaming and seeks 1,000 so-called collaborative combat aircraft to swell its ranks, and the Navy envisions a future fleet teeming with uncrewed vessels, so too is the Army looking at ways of augmenting its troops with computer-powered might.

“If we’re going to posit that we do want inhabited cockpits, but we want more autonomy in those cockpits, I think that’s where we’re seeing some tremendous technology,” Rugen said. “Again, when we talk about some of the limits, we really see the machine not having the curiosity that humans do — what makes us human.”

The Defense Department considers AI a modernization priority and has invested in it, though the exact sum is unclear. AI is often a slice of a larger program, and classified activities can muddy the disclosure waters.

More than 685 AI projects, including some associated with major weapons systems, were underway at the Pentagon as of early 2021, the most up-to-date tally, the Government Accountability Office said.

The Army, the largest military service, is leading the pack. At least 232 efforts can be traced back to it, according to the federal watchdog. The Marine Corps, on the other hand, is dealing with at least 33.

What is ‘deep sensing’ and why is the US Army so focused on it?

AI is expected to aid target recognition aboard the Army’s Optionally Manned Fighting Vehicle, or OMFV, help sort and send information beamed to its Tactical Intelligence Targeting Access Node, or TITAN, and underpin the navigation of robotic combat vehicles, or RCVs, designed for scouting and escorting.

The technology is also being used to streamline logistics and offload monotonous, time-consuming or finicky tasks. The Army in September selected BigBear.ai for a $14.8 million contract to roll out the service’s Global Force Information Management system, designed to give service leaders an automated and holistic view of manpower, equipment, training and readiness. In October, the service picked Palantir for a separate $85.1 million predictive modeling software contract to get ahead of maintenance needs.

Army Chief of Staff Gen. James McConville in late February told reporters conflicts would be waged increasingly by a combination of man and machine. And for FARA, that is likely to be the case: a crew assisted by digital prowess, increasing performance and reducing the chance of sensory overload.

“When I look at manned-unmanned teaming, that’s going to even become more prevalent,” McConville said at a Defense Writers Group event. “It’s going to be unmanned-manned teaming on the ground, in the air and really a combination of both, and it’s going to be ubiquitous throughout the battlefield.”

The Army’s fiscal 2024 budget request, totaling $185.5 billion, sets aside $283 million for AI.

The funding, budget documents state, would cover research and development “for enhanced autonomy experimentation” as well as AI-enabled activities tied to OMFV, TITAN, RCVs and info-processing.

Defense News reporter Jen Judson contributed to this article.

]]>
Bell
<![CDATA[Leonardo CEO pick Cingolani may signal cybersecurity, AI focus]]>https://www.c4isrnet.com/global/europe/2023/04/12/new-leonardo-ceo-pick-cingolani-may-signal-cybersecurity-ai-focus/https://www.c4isrnet.com/global/europe/2023/04/12/new-leonardo-ceo-pick-cingolani-may-signal-cybersecurity-ai-focus/Wed, 12 Apr 2023 21:36:09 +0000ROME — The Italian government nominated Roberto Cingolani as the new CEO of state-controlled defense giant Leonardo on Wednesday, ending weeks of speculation over who would take over from incumbent Alessandro Profumo.

The naming of Cingolani, 61, was part of a sweep of new hires by the government at key state-controlled firms and coincided with the end of Profumo’s second mandate at the helm of Leonardo.

Former NATO Senior Civilian Representative to Afghanistan Stefano Pontecorvo will be the new Leonardo chairman.

The appointments must now be formally approved by shareholders. A trained physicist, Cingolani initially joined Leonardo in 2019, becoming chief technology and innovation officer and spearheading cyber and AI programs before he was named Italy’s “green transition” minister in 2021 by former Italian Prime Minister Mario Draghi.

When Draghi’s government fell last year, to be replaced after September elections by a coalition led by incoming Prime Minister Giorgia Meloni, Cingolani was asked to stay on as an energy adviser to the new government, a role he has combined with rejoining Leonardo to work on space programs.

His appointment will likely see Leonardo continue its focus on cybersecurity programs. Cingolani was favored for the CEO job by Prime Minister Meloni, despite her defense minister, Guido Crosetto, pushing for the appointment of Lorenzo Mariani, the head of Italian operations at MBDA, the European missile firm in which Leonardo has a stake.

While industry insiders said Cingolani had the vision required to run a global firm, Mariani was seen as having better inside knowledge of the company and as being able to step into the role faster, having already run a European defense firm.

Outgoing CEO Profumo worked in banking before he took over Leonardo in 2017 from Mauro Moretti, who had previously run Italy’s rail network.

Both men needed time to learn the ropes at Leonardo.

After six years on the job, Profumo is bequeathing Cingolani a firm in decent financial shape; its new orders rose 21 percent year on year to €17.266 billion last year, beating its forecast of €16 billion, or $17.6 billion.

]]>
JOHN THYS