xTechScalable AI 2

    DESCRIPTION

    The U.S. Army invites U.S.-based Artificial Intelligence (AI)-focused small businesses to participate in the xTechScalable AI 2 competition, a competition for eligible small businesses across the U.S. to engage with the Department of Defense (DoD), earn prize money, participate in a business accelerator program, and submit a Phase I or Direct to Phase II Army Small Business Innovation Research (SBIR) proposal.

    The Assistant Secretary of the Army for Acquisition, Logistics and Technology (ASA(ALT)) is partnering with Program Executive Office Intelligence, Electronic Warfare & Sensors (PEO IEW&S) to deliver the xTechScalable AI 2 competition. The Army recognizes that the DoD must enhance engagements with small businesses by (1) understanding the spectrum of world-class technologies being developed commercially that may benefit the DoD in the artificial intelligence space; (2) integrating the sector of non-traditional innovators into the DoD Science and Technology (S&T) ecosystem; and (3) providing expertise and feedback to accelerate, mature, and transition technologies of interest to the DoD.

    The xTechScalable AI 2 competition will consist of four rounds:

    (1) Call for concept white papers;

    (2) Virtual technology pitch event;

    (3) Final demonstration; and

    (4) Opportunity to submit a Phase I or Direct to Phase II Army SBIR proposal.

    The competition will award up to $603,000 in cash prizes to selected participants. Up to 16 finalists will receive a cash prize of $10,000 each and an invitation to pitch their innovative technology solutions to a panel of Army and DoD subject matter experts during an in-person event at or around the 2024 Association of the United States Army (AUSA) Annual Meeting and Exposition in October 2024 in Washington, D.C. The competition will select up to 12 final winners, who will receive cash prizes as follows: 1st place, $100,000; 2nd place, $50,000; 3rd place, $25,000; and 4th through 12th place, $12,000 each. All final winners of the competition may submit either a Phase I SBIR proposal of up to $250,000 each or a Direct to Phase II SBIR proposal of up to $2,000,000 each.

    In addition to non-dilutive cash prizes, participants will have the opportunity to engage with Army and DoD representatives through information-sharing and networking opportunities. Finalists will be entered into an optional xTech Accelerator to receive intensive mentorship and access to networking events to assist in growing their companies for DoD and commercial use. Details on the prize structure are listed in the announcement below.

    The efforts described in this notice are being pursued under the authorities of 10 U.S.C. § 4025 to award cash prizes recognizing advanced technology achievements in the field of Scalable AI. All winners will be eligible to submit for a Scalable AI Army SBIR Phase I or Direct to Phase II award under the provisions and requirements of 15 U.S.C. § 638.

    While the authority for this program is 10 U.S.C. § 4025, the xTechScalable AI 2 competition may generate interest from another U.S. Army, DoD, or United States Government (USG) organization in a funding opportunity outside of this program (e.g., submission of a proposal under a Broad Agency Announcement). The interested organization may contact the participant to provide additional information or to request a proposal under a separate solicitation.

    All xTechScalable AI 2 competition submissions are treated as privileged information, and contents are disclosed to government employees or designated support contractors only for the purpose of evaluation and program support.

    The xTech Program will provide feedback from evaluators to participants during each part of the competition. The purpose of providing this feedback is to accelerate transition of the technology to an Army end-user by providing insight on best applications for the technology, suggestions for product improvement for Army use and recommended next steps for development. However, the government may not respond to questions or inquiries regarding this feedback.

    TOPICS AND PROBLEM STATEMENTS

    The xTechScalable AI 2 competition seeks cutting-edge technology solutions that will allow the Army to leverage the power of AI and operationalize it at scale, driving significant advancements in military capabilities while addressing complex challenges and enhancing national security. The competition seeks technology solutions that fit within one of the three topic areas:

    Topic 1: Scalable tools for automated AI risk management and algorithmic analysis:

    As the Army deploys AI systems, there is an inherent risk that the AI model could fail to perform as expected. AI algorithms are complex and have many factors that can affect their performance, some of which include Malware, Data Poisoning, Model Evasions, Model Inversions, and Deepfake attacks. These factors could lead the AI model to make incorrect inferences which could have significant mission impacts. The Army seeks to develop automated tools to evaluate AI system risk. Specifically, the Army is looking for new methods to evaluate, quantify, and mitigate risk against an AI Risk Management Framework (RMF) to ensure deployed AI models are trusted and validated. Tools also need to be automated to reduce the cognitive workload required from the warfighter to validate AI model factors against an AI RMF. This need extends across multiple modalities and model types, to include imagery, synthetic aperture radar, large language models (LLMs) and radio frequency data. There are multiple challenges for quantifying AI risk in the DoD domain; this effort is meant to begin addressing some of those challenges: build a baseline characterization of risk-related performance of pre-trained models; develop preliminary DoD-specific benchmarks for a set of DoD-related tasks/prompts; and document the divergence that occurs with fine-tuning steps by factors such as model type, data modality, and inference engine. The Army is aware of existing open-source and commercial tools related to cybersecurity and AI risk management. However, an automated tool that adapts commercial experiences and open-source methodologies for military use is needed, as testing and evaluation for the resulting tools will be derived from Army use cases.
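
    As a purely illustrative sketch of the baseline-and-divergence workflow described above (not an existing Army or RMF tool), the outline below scores a pre-trained and a fine-tuned model against a small set of placeholder benchmark tasks and reports how far the fine-tuned model diverges. The names BENCHMARK_TASKS, run_model, and risk_score are hypothetical stand-ins.

        # Minimal sketch: baseline risk characterization of a pre-trained model
        # and documentation of divergence after fine-tuning. All names below
        # (BENCHMARK_TASKS, run_model, risk_score) are hypothetical placeholders.
        from statistics import mean

        BENCHMARK_TASKS = ["task_a", "task_b", "task_c"]   # stand-ins for DoD-related prompts/tasks

        def run_model(model, task):
            # Placeholder inference call; a real harness would invoke the model
            # under test (imagery, SAR, LLM, or RF) on the benchmark input.
            return model(task)

        def risk_score(output):
            # Toy risk metric in [0, 1]; a real tool would map findings to AI RMF
            # categories (e.g., evasion susceptibility, inversion leakage).
            return min(1.0, len(str(output)) / 100.0)

        def characterize(model):
            # Baseline characterization: one risk score per benchmark task.
            return {t: risk_score(run_model(model, t)) for t in BENCHMARK_TASKS}

        def divergence(baseline, finetuned):
            # Per-task drift of the fine-tuned model from the pre-trained baseline,
            # plus a single aggregate figure for reporting.
            per_task = {t: finetuned[t] - baseline[t] for t in baseline}
            return per_task, mean(abs(d) for d in per_task.values())

        if __name__ == "__main__":
            pretrained = lambda task: f"pretrained output for {task}"
            fine_tuned = lambda task: f"fine-tuned output for {task} with extra detail"
            base, tuned = characterize(pretrained), characterize(fine_tuned)
            per_task, aggregate = divergence(base, tuned)
            print("baseline:", base)
            print("divergence:", per_task, "aggregate:", round(aggregate, 3))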

    The Army will accept proposals on any AI RMF challenge requiring the application of scalable AI techniques. However, to maximize impact and scalability across Army AI model development and deployment, the Army will prioritize for award submissions that address the following core need areas:

    • Automated tools that can identify multiple dimensions of AI risk, classify AI risk, quantify AI risk, and propose mitigation options that reduce overall risk to the government deployment of AI systems (including open-source data sets or “black box” models).
    • Automated tools that can accept risk-related inputs from multiple data sources (e.g., model design, model outputs, source code, and data infrastructure) and modalities (e.g., imagery, text, and radio frequency).
    • Automated tools with standardized evaluation methods and mitigation strategies to enable full scalability across the Army enterprise.
    • Automated tools that can be used across multiple Army units from the Program Office to end users.

    Topic 2: Scalable techniques for robust testing and evaluation of AI operations pipelines:

    As the Army moves toward maximizing industry advancement for delivery of AI products, solutions, and services, a robust and automated Test & Evaluation (T&E) approach is needed across AI Operations Pipelines. The ability to assess industry AI products, open-source solutions, and government-built solutions generated to support AI Operations is critical to keep pace with innovation. However, there are multiple factors that make building AI operations pipelines in the DoD domain uniquely challenging. The DoD must operate with data and systems at varying classification levels and network configurations. Any resulting products or solutions must also comply with stringent rules for obtaining and maintaining an Authority to Operate. Key metrics may include speed (e.g., task, workflow, efficiency, model latency); accuracy; model size (e.g., number of parameters, processing need, storage); authority of the source; model sensitivity to prompts; the creativity setting allowed for the LLM outputs (e.g., “full factual” to “full imaginative”); effectiveness of Retrieval-Augmented Generation; and other configuration factors that impact performance.
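
    To make the metric categories above concrete, the following sketch shows one way an automated harness might record model latency, accuracy, and a size proxy for a candidate model. It is illustrative only; the model, test set, and parameter count are hypothetical stand-ins rather than part of any specific Army pipeline.

        # Illustrative T&E harness for a few of the metrics named above:
        # per-inference latency, accuracy on a held-out test set, and parameter
        # count as a proxy for model size. Everything here is a toy stand-in.
        import time

        def evaluate(model, test_set, parameter_count):
            latencies, correct = [], 0
            for example, expected in test_set:
                start = time.perf_counter()
                prediction = model(example)              # single inference call
                latencies.append(time.perf_counter() - start)
                correct += int(prediction == expected)
            return {
                "mean_latency_s": sum(latencies) / len(latencies),
                "accuracy": correct / len(test_set),
                "parameters": parameter_count,           # size / storage proxy
            }

        if __name__ == "__main__":
            toy_model = lambda text: text.upper()        # placeholder "model"
            toy_test_set = [("a", "A"), ("b", "B"), ("c", "c")]
            print(evaluate(toy_model, toy_test_set, parameter_count=1_000_000))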

    The Army will accept proposals on any T&E challenge requiring the application of scalable AI techniques. However, to maximize impact and scalability across Army AI model development and deployment, the Army will prioritize for award submissions that address the following three need areas:

    • Data Integrity: It is essential to carefully curate and maintain training datasets to ensure robust and reliable machine learning models in real-world applications. However, over time, the operational environment can change significantly, making old training data less representative of the current situation and potentially leading to inaccurate model performance. Data drift can manifest in various ways, such as a change in distribution, a change in feature relevance, or the presence of new classes or outliers (a minimal drift-check sketch follows this list). To address this, the Army is interested in:

    o   Automated tools to identify and evaluate data integrity inside government training data repositories.

    • Data Labeling: Accurate, reliable, and automated data labeling methodologies are critical components of building machine learning models that are capable of performance in real-world scenarios. To facilitate this capability, the Army is interested in:

    o   Automated tools to assess the quality, consistency, and accuracy of labels applied to training datasets.

    • Model Training: Evaluating model performance is a critical part of the Army’s strategy to deliver trusted AI. The Army is interested in innovative T&E research related to model training for the following areas:

    o   Resource consumption: Compute, storage, and energy resources required for deploying, operating, and maintaining an AI system over its entire lifecycle.

    o   Robustness: Tools to assess how well the model performs under various conditions, such as extreme inputs or when data is noisy.

    o   Scalability: Tools to evaluate how well the model performs when dealing with large datasets, multiple input/output features, and various data sources.

    o   Privacy and Security: Tools to ensure that the AI system adheres to strict privacy regulations and does not leak sensitive information from training or test data.
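
    The sketch below illustrates the kind of automated check referenced under the Data Integrity need area: it compares a feature's distribution in an archived training set against newly collected operational data and flags classes never seen in training. The datasets and the simple scoring rule are hypothetical examples, not Army data or an existing tool.

        # Minimal data-drift sketch: a crude distribution-shift signal plus a
        # check for classes present in new data but absent from training data.
        from statistics import mean, pstdev

        def distribution_shift(train_values, new_values):
            # Shift of the new mean, in units of the training standard deviation
            # (a z-score style signal; real tools would use richer tests).
            spread = pstdev(train_values) or 1.0
            return abs(mean(new_values) - mean(train_values)) / spread

        def unseen_classes(train_labels, new_labels):
            # Classes that appear in operational data but not in training data.
            return sorted(set(new_labels) - set(train_labels))

        if __name__ == "__main__":
            train_feature = [0.90, 1.00, 1.10, 1.00, 0.95]
            field_feature = [1.60, 1.70, 1.65, 1.80]        # distribution has moved
            train_labels = ["tank", "truck"]
            field_labels = ["tank", "truck", "radar"]       # new class appears
            print("drift score:", round(distribution_shift(train_feature, field_feature), 2))
            print("unseen classes:", unseen_classes(train_labels, field_labels))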

    Topic 3: Scalable techniques for center of mass and course of action analytics for intelligence preparation of the battlefield:

    Visualization of enemy equipment and unit entities on a map is critical for efficient military decision-making. Unfortunately, sensors often acquire high volumes of data that bury maps in a “sea of red,” making the display of individual entities burdensome and not easily understandable. The Army technical problem can be broken down into several areas as it relates to Multi-Domain Operations (MDO). First, current collection plan generation is performed in a siloed approach based on mission objectives. Often, it is completed through spreadsheets and PowerPoint. Second, these collection plans are not visible or shareable to entities outside of the unit organizations that create them. This leads to inefficiencies and decreased timeliness of critical information. Lastly, collection plans are mostly generated manually. This requires multiple human-generated steps to develop an optimized collection plan and often has no relationship to other collection plans that may have similar objectives.

    The purpose of this topic is to demonstrate how novel approaches and techniques can address these challenge areas and to develop AI algorithms and prototypes to simplify data visualization. The Army is interested in a Center-of-Mass algorithm that can group organizationally related entities together for display purposes. This algorithm must also be easily transitioned into Program Manager Intelligence Systems and Analytics (PM IS&A) products. This technology is important for intelligence analysis and for validating enemy Courses of Action (COAs). The Center-of-Mass algorithm must understand entity relationships, what units and equipment can be grouped together (tanks and BMPs versus tanks and re-supply vehicles), terrain and hydrology limitations (the center of mass cannot be in the middle of a lake), and what constitutes a certain echelon (three to four tanks is an armor platoon, a tank and three BMPs is a motorized rifle platoon, etc.). The Center-of-Mass algorithm will be used to determine echelon, composition type (armor versus artillery), and strength and direction over time. This can then be compared to a situation template (SITEMP) with time phase lines to perform enemy COA validation. COA validation can include whether expected avenues of approach and enemy force composition and strength are valid, whether named areas of interest (NAIs) are appropriately placed, and actual versus planned enemy movement rates.
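
    A minimal sketch of the grouping idea follows, assuming simple point detections that carry a type and a position: it clusters nearby entities, computes a center of mass for display, and applies the toy echelon rules quoted above (three to four tanks as an armor platoon, a tank plus three BMPs as a motorized rifle platoon). It is illustrative only and is not PM IS&A code; the terrain and hydrology constraints described above are omitted.

        # Illustrative center-of-mass grouping: greedy proximity clustering of
        # detections, a centroid per cluster, and a toy echelon inference rule.
        from math import dist

        def cluster(entities, max_km=2.0):
            # Greedy single-link grouping: an entity joins the first cluster that
            # already contains a detection within max_km of it.
            clusters = []
            for ent in entities:
                for group in clusters:
                    if any(dist(ent["pos"], other["pos"]) <= max_km for other in group):
                        group.append(ent)
                        break
                else:
                    clusters.append([ent])
            return clusters

        def summarize(group):
            xs = [e["pos"][0] for e in group]
            ys = [e["pos"][1] for e in group]
            tanks = sum(e["type"] == "tank" for e in group)
            bmps = sum(e["type"] == "bmp" for e in group)
            if 3 <= tanks <= 4 and bmps == 0:
                echelon = "armor platoon"
            elif tanks == 1 and bmps == 3:
                echelon = "motorized rifle platoon"
            else:
                echelon = "unresolved"
            return {"center_of_mass": (sum(xs) / len(xs), sum(ys) / len(ys)),
                    "composition": {"tank": tanks, "bmp": bmps},
                    "echelon": echelon}

        if __name__ == "__main__":
            detections = [
                {"type": "tank", "pos": (10.0, 4.0)}, {"type": "tank", "pos": (10.5, 4.2)},
                {"type": "tank", "pos": (11.0, 4.1)},
                {"type": "tank", "pos": (30.0, 8.0)}, {"type": "bmp", "pos": (30.4, 8.1)},
                {"type": "bmp", "pos": (30.8, 8.2)}, {"type": "bmp", "pos": (31.1, 8.3)},
            ]
            for group in cluster(detections):
                print(summarize(group))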

    SCHEDULE AND PRIZES

    Part 1: Concept White Paper

    Mar 12, 2024 - May 17, 2024

    Up to 32 Semifinalists

    $5,000 each

    Part 2: Virtual Technology Pitches

    Aug 5, 2024 - Aug 9, 2024

    Up to 16 Finalists

    $10,000 each

    Part 3: Finals Demonstration

    Oct 14, 2024 - Oct 16, 2024

    Up to 12 Winners

    1st place: $100,000; 2nd place: $50,000; 3rd place: $25,000; 4th-12th place: $12,000 each

    Part 4: Request for Phase I or Direct to Phase II SBIR Proposal

    Oct 22, 2024 - Nov 19, 2024

    Up to 12 Selectees

    Up to $250,000 (Phase I) or up to $2,000,000 (Direct to Phase II) each
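
    For reference, the schedule maxima above account for the full $603,000 prize pool cited in the description, assuming every prize is awarded at its listed amount:

        # Arithmetic check of the advertised prize pool (all maxima awarded).
        semifinalists = 32 * 5_000                               # $160,000
        finalists = 16 * 10_000                                  # $160,000
        winners = 100_000 + 50_000 + 25_000 + 9 * 12_000         # $283,000
        print(semifinalists + finalists + winners)               # 603000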

    ELIGIBILITY

    The competition is open to small, for-profit, independent U.S. businesses. Restrictions apply to (1) the type of firm; (2) its ownership structure; (3) the firm’s size in terms of the number of employees; and (4) prior, current, or pending support of similar proposals or awards, as follows:

    (1) Type of Firm: An eligible firm must be organized as a for‐profit concern and meet all the other small business requirements in 13 C.F.R. § 121.702. Non‐profit entities are not eligible.

    (2) Ownership and Control: A majority (more than 50%) of an eligible firm’s equity (e.g., stock) must be directly owned and controlled by one of the following:

    1. One or more individuals who are citizens or permanent resident aliens of the U.S.;
    2. Other for‐profit small business concerns (each of which is directly owned and controlled by individuals who are citizens or permanent resident aliens of the U.S.); or
    3. A combination of 1 and 2 above.

    Note: If an employee stock ownership plan owns all or part of the concern, each stock trustee and plan member is considered an owner. If a trust owns all or part of the concern, each trustee and trust beneficiary is considered an owner.

    (3) Size: An eligible firm, together with its affiliates, must not have more than 500 employees.

    (4) Prior, Current, or Pending Support with Similar Technology: Proposals submitted in response to this prize competition must not be substantially the same as another proposal that was funded, is now being funded, or is pending contract award with another federal agency. Small businesses with any questions concerning prior, current, or pending support of similar proposals or awards must raise them with the xTech Program Office as early as possible.

