Introduction
Several elements of intelligence and sub-fields exist under the umbrella of Artificial Intelligence (AI), such as Machine Learning (ML), Neural Networks (NN), and Deep Learning (DL). Software Testing Help (2022) describes humans as the most intelligent species on earth because they can solve problems and analyze large volumes of data using skills such as analytical thinking, logical reasoning, statistical knowledge, and mathematical or computational intelligence. After characterizing humans as "the most intelligent," the author supports the claim by noting that AI is modeled on the operations of the human brain, and goes on to describe the diverse fields of AI and how its characteristics and capabilities have shaped software assurance (SwA) over time. This work aims to define AI and distinguish each of its domains, with particular attention to SwA. Finally, the paper discusses applications of software assurance in AI-based systems. The main motive of this study is to bring more theoretical and terminological clarity to the role of machine learning and AI in building agents that mimic human intelligence efficiently and intelligently. In today's ever-emerging era of technology, neural networks are among the most active areas of AI and machine learning, underpinning natural language processing, voice and text recognition, and image processing. A neural network is a series of algorithms that mimics the connections of neurons in a human brain; in simple words, neural networks are collections of connected units of artificial neurons. Neural networks therefore contribute heavily to AI systems that reduce human effort by replacing it with intelligent agents and AI-based machines.
Artificial Intelligence / Machine Learning
What is Artificial Intelligence (AI)?
At some point in our daily lives, most of us use Cortana, Siri, Bixby, or Google Assistant to find useful information with our voice. We can command "Hey Siri, turn off the alarm" or "Hey Google, play 'White Christmas' by Bing Crosby," and the assistant plays it. AI technologies like Google Assistant, Alexa, and Siri have become personal digital assistants that help us find useful information and perform tasks over the internet when needed. They respond to our queries with relevant information by either searching millions of websites or working through the options and applications installed on our phones. This is one of the simplest examples of AI and its usage in the digital world.
The field of AI is difficult to define because different definitions derive from multiple paradigms and are often context dependent. One paradigm is centered on human performance; the other depends on reasoning power and thought processes, where man and machine try to "surpass" each other. To that end, one may have difficulty separating where the machine begins and man ends. This is how AI is transforming the way the Department of Defense (DoD) approaches the future battlefield and the pace of the threats it must face, as reflected in the DoD AI strategy. That strategy directs the DoD to accelerate the adoption of AI practices for protecting the nation's security, improving living standards, deterring war, and ensuring a force fit for the future for the next generations, built upon individual freedoms (Mattis, 2018).
Evolving Definitions of Artificial Intelligence
As with many concepts, it is difficult to define the discipline of AI in a detailed and generally accepted form; the definitions collected here come from reputable sources and vary with the question being asked and the paradigm being applied. An introductory definition holds that AI is the science and engineering of making smart, intelligent machines, advanced computer systems in particular, and is closely tied to the effort to understand human intelligence. Although these advanced machines mimic human intelligence, the field does not confine itself to methods that are biologically observable (McCarthy, 2004). At the 1956 Dartmouth Summer Research Project on Artificial Intelligence, Herb Simon and Allen Newell demonstrated the Logic Theorist, a system able to prove theorems in symbolic logic (Simon, 1983). This event marked the beginning of AI as a field of study and is often considered AI's first foray into high-order intellectual processes. The workshop, organized by John McCarthy and Marvin Minsky, also introduced several other systems (McCarthy et al., 2006).
Today, modern definitions treat AI as a sub-field of computer science concerned with how advanced machines can imitate human intelligence. In another paradigm, AI is "the theory and development of computer systems that are able to perform tasks normally requiring human intelligence such as decision making, perception, translation between languages, and speech recognition" (Oxford Languages | The Home of Language Data, n.d.). Finally, the National Artificial Intelligence Initiative Act of 2020 asserts, "The term 'artificial intelligence' means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to – (A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated manner; and (C) use model inference to formulate options for information or action" (National Artificial Intelligence Initiative, 2023). Thus, the discipline of AI involves processes that collect knowledge and information and, acting in a human-like way, use that learning to adapt to different scenarios and new environments. As defined in the DoD AI strategy, "AI refers to the ability of machines to perform tasks that normally require human intelligence – for example, recognizing patterns, learning from experience, drawing conclusions, making predictions, or taking action – whether digitally or as the smart software behind autonomous physical systems."
Moreover, in its most basic form, AI is a discipline that combines computer science with robust datasets and algorithms to enable problem-solving through "man-made computational devices and systems which we would be inclined to call intelligent" (Brachman, 2005). In the contemporary world of technological advancement, there is optimism surrounding the development of AI for product innovation in many areas of life. A peak of inflated expectations is a common phenomenon when emerging technologies reach the market. As a result, the AI industry has seen a surge in development and advancement, led by leaders in the field of computer science (Thornberry, 2021). AI development seeks to replicate the human mind and intelligence to create improved services for humans; the goal is to create automated, authentic, and personalized services for users and customers with the help of AI technology.
Types of Artificial Intelligence
Artificial Intelligence (AI) has been evolving since its conception in the 1950s. AI has become an integral part of many aspects of everyday life and has played a crucial role in the advancement of modern technology. Alan Turing is widely regarded as a pioneer in the field of AI, having made significant contributions such as the Turing Test, which is used to evaluate an AI’s ability to think and behave like a human. AI is typically divided into two main categories: weak AI, which refers to systems that are designed to perform specific tasks such as facial recognition and language translation, and strong AI, which refers to systems that can perform complex tasks and make decisions based on a range of data. AI can further be separated into four main types as defined by Arend Hintze, researcher and professor of integrative biology at Michigan State University (https://theconversation.com/understanding-the-four-types-of-ai-from-reactive-robots-to-self-aware-beings-67616).
Reactive Machines
Reactive machines are AI systems that have no memory: a given input always produces the same output. These machines are task-specific and react only to the present situation, because they cannot use past experiences to inform their decisions.
Limited Memory
Limited memory refers to machines that can look into the past by monitoring specific objects over a period of time. Self-driving cars are the prime example: they observe how other cars move, in what direction and at what speed, in order to make decisions such as when to change lanes. However, the information limited-memory machines retain is only transient; these machines cannot accumulate lifelong experience the way human beings do.
Theory of Mind
This type of AI is based on the psychological notion that people and other creatures have distinctive thoughts and emotions that affect their environment, their way of living, their work, and their behavior. It refers to the understanding that, if AI-based models are to live among humans, they will have to adjust their behavior to the feelings and expectations of humans and other creatures, and vice versa.
Self-awareness
Finally, this type refers to a level of development at which systems would form specific representations about themselves, possessing a "theory of mind" of their own, in order to become 'conscious' machines.
Four Categories of Artificial Intelligence
AI ascribes intelligence to machines in several ways: making machines solve classes of problems, form concepts and abstractions, use or translate languages, and simulate capabilities traditionally reserved for human beings. The four categories in which the DoD can utilize Artificial Intelligence include:
Decision Support Augmentation
A decision support algorithm (DSA) is a software program or set of rules used by AI to make decisions. A primary contributor to the development of such algorithms is computer scientist and AI expert Lotfi Zadeh, whose work on fuzzy logic and decision support algorithms has been integral to the advancement of artificial intelligence technology. DSAs use a variety of data sources, such as historical data, current data, or even artificial intelligence, to provide analysis or recommendations that aid decision-makers in making informed decisions. DSAs can be used in a wide range of industries, such as finance, healthcare, or retail, to provide insights that help optimize decision-making.
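To make the idea concrete, here is a minimal, illustrative sketch of a rule-based decision aid in Python; the factor names, thresholds, and weights are hypothetical examples, not drawn from any fielded DSA.

```python
# Minimal sketch of a rule-based decision support aid (illustrative only).
# The factor names, thresholds, and weights below are hypothetical examples.

def fuzzy_risk(value, low, high):
    """Map a raw value onto a 0..1 'risk' degree, fuzzy-logic style."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def recommend(inventory_days, demand_trend, supplier_delay_days):
    """Combine fuzzy degrees from several data sources into a recommendation."""
    risk = max(
        fuzzy_risk(30 - inventory_days, 0, 30),   # low stock raises risk
        fuzzy_risk(demand_trend, 0.05, 0.25),     # rising demand raises risk
        fuzzy_risk(supplier_delay_days, 2, 14),   # slow suppliers raise risk
    )
    if risk > 0.7:
        return f"Reorder now (risk={risk:.2f})"
    if risk > 0.3:
        return f"Review next cycle (risk={risk:.2f})"
    return f"No action needed (risk={risk:.2f})"

print(recommend(inventory_days=12, demand_trend=0.18, supplier_delay_days=9))
```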
Augmented Analytics
Augmented analytics, enabled by cloud technologies, takes the best of emerging business intelligence and brings decision-making to a level where important real-time data is produced for intelligent, automated decisions. Companies continuously generate valuable new data, and augmented analytics supplements the processing capabilities of the people working in those companies so that balanced, rational business decisions can be made with minimal human bias and error. In the age of big data, critical business decisions are increasingly delegated to intelligent automated resources, and wrong or biased decisions can be costly; it is therefore important that decisions be based on unbiased, non-judgmental AI. This increases the likelihood that human beings will feel trusted, safe, and secure when sharing personal information, producing better outcomes and moving toward a productive cognitive synthesis between people and machines. A crucial part of an augmented analytics system is its ability to collect complex data that is personalized to each human, according to pre-programmed principles, to enhance insight during the decision-making process within the AI foundation (Freedberg Jr, 2015).
Digital Virtual Assistants Communicating through Natural Human Language
No individual has the capacity and cognitive skill to access, comprehend, or store all the information in existence; fully comprehending and evaluating every perspective of even a single project can take a lifetime and demands a high level of cognitive complexity. Perspectives differ with the immense size of the world, geographical position, cultural differences, socioeconomic history, and knowledge and skills, far beyond what a single human mind can hold. To help overcome this limitation, AI has become our own virtual assistant for speech recognition, using natural human language to communicate answers to users' queries. It all starts with the "wake" words users speak to the personal virtual assistants available on tablets, mobile phones, computers, and even standalone devices, which then respond by listening to their users' requests.
The AI in those devices compares the user's request against a database, splits it into separate commands, interprets those commands, and follows the actions the database provides to produce the correct output. These personal digital assistants learn much the way humans learn. For instance, if you ask Siri, the AI assistant on Apple phones, to call a person named George and it dials the number saved under the name Judge, you would say "Stop," and it would recognize that it made a mistake. Next time, Siri can use this feedback for improvement, and better outcomes can be expected. This is AI at work: the assistant used AI to understand human language, communicated with the phone's built-in call function, waited for a response, and produced the desired action. These virtual assistants rely on speech recognition software and natural language processing to work effectively.
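As an illustration of the "split the request and match it to an action" step, the following Python sketch performs naive keyword-based intent matching; real assistants use statistical NLP models, and the keyword table and action names here are invented.

```python
# Illustrative sketch of the "split the request into commands and match an intent"
# step described above. The keyword table and actions are hypothetical placeholders.

INTENTS = {
    "call":  {"keywords": {"call", "dial", "phone"},  "action": "start_call"},
    "alarm": {"keywords": {"alarm", "wake", "timer"}, "action": "set_or_stop_alarm"},
    "music": {"keywords": {"play", "song", "music"},  "action": "play_media"},
}

def match_intent(utterance: str) -> str:
    """Pick the intent whose keyword set overlaps the user's words the most."""
    words = set(utterance.lower().split())
    best_intent, best_score = "unknown", 0
    for name, spec in INTENTS.items():
        score = len(words & spec["keywords"])
        if score > best_score:
            best_intent, best_score = name, score
    return best_intent

print(match_intent("please call George"))     # -> call
print(match_intent("turn off the alarm"))     # -> alarm
```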
Natural Language Processing and Speech Recognition
Natural language processing is the branch of AI concerned with building machines that recognize and react to words, text, or voice in much the same way a human can. It interprets and manipulates human language, in the form of voice data or text, using software that comprehends the natural way humans communicate. AI-powered automated translation tools are becoming increasingly popular for their ability to translate large amounts of data quickly and accurately by combining machine learning, natural language processing, and neural networks; a study conducted by the Department of Computer Science and Engineering at the University of North Texas (UNT) found that the usage of these tools is rapidly increasing (Lang, 2020). These tools provide a valuable resource for bridging language barriers across nations. Speech recognition, also known as Automatic Speech Recognition (ASR), is an interdisciplinary subfield of computational linguistics that reliably converts voice data into text data (Reshamwala et al., 2013).
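For readers who want to experiment, the sketch below shows a minimal ASR call using the third-party SpeechRecognition Python package; the placeholder audio file name and the use of the free Google web recognizer are assumptions for illustration only.

```python
# A minimal ASR sketch using the third-party SpeechRecognition package
# (pip install SpeechRecognition). The file name "meeting.wav" is a placeholder;
# recognize_google() sends audio to a free web API and needs network access.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("meeting.wav") as source:      # hypothetical input file
    audio = recognizer.record(source)            # read the whole file into memory

try:
    text = recognizer.recognize_google(audio)    # voice data -> text data
    print("Transcript:", text)
except sr.UnknownValueError:
    print("Speech was unintelligible")
except sr.RequestError as err:
    print("ASR service unavailable:", err)
```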
Voice and style guide systems use AI and Natural Language Processing (NLP) to interpret human language. AI, specifically computational linguistics, develops technologies and methodologies that categorize words, enable the recognition of voice and words, and convert speech into text. NLP is then used to decode the human voice, comprehend the message, and convert it into text with a certain confidence level, allowing the virtual assistant to identify the user's intent, the context of the conversation, and the meaning of the sentence.
Voice to Text – Decision Automation Translation, Image Processing, and Classifying Content
Among the variety of AI domains, voice-to-text is a significant feature within speech services that quickly and accurately transcribes speech into text across many languages and variants. Decision automation is an infrastructure that uses AI rules and strategies to help organizations automate decision-making in response to specific triggers; paired with voice-to-text, it lets organizations automatically make choices that would otherwise have been made by a human. Decision automation through AI exposes decision-making technologies through which an organization codifies its principles for the NLP system. In decision automation translation, automation is a powerful tool that enables users to configure projects that run automatically when a defined trigger fires and then carry out follow-up actions.
Furthermore, AI helps improve the user experience by classifying requested content into bucket categories under appropriate topics. Supervised automated text classification asks devices to imitate human intelligence by assigning predefined labels, performing data classification at scale. Classifying content into categories makes the whole process efficient and fast through AI applications with clear rules and ML algorithms. Content classification has enormous potential for the coming years because, as more and more information is posted across websites, analyzing and classifying it with intelligent machine algorithms becomes essential.
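A minimal sketch of supervised text classification, assuming scikit-learn and a pair of invented example categories, shows how a predefined label can be assigned at scale.

```python
# Minimal sketch of supervised text classification with scikit-learn.
# The categories and example documents are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_docs = [
    "quarterly earnings and stock prices",      # finance
    "interest rates and market outlook",        # finance
    "patient care and clinical trials",         # healthcare
    "hospital staffing and diagnosis codes",    # healthcare
]
train_labels = ["finance", "finance", "healthcare", "healthcare"]

# TF-IDF features + a Naive Bayes classifier: assign a predefined label at scale.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_docs, train_labels)

print(model.predict(["new clinical trial results released"]))  # 'healthcare' expected
```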
Image Processing
Image processing is another engine room of AI automation that interprets unstructured data captured through sensing, such as speech, video, and images. It is an interdisciplinary field that uses algorithms to manipulate, analyze, and transform digital images, applying a variety of methods to extract and analyze information from them. Image processing is used in many applications, including medical imaging, facial recognition, machine vision, and remote sensing, and it is also used to enhance the quality of digital photos and videos (Wikipedia Foundation, 2023). The technology that extracts data from images through a set of techniques and algorithms is known as optical character recognition (OCR), including cognitive OCR. Image processing has two further sub-categories: image classification, which assigns a category to an image based on its pixel content, and object detection, which identifies and locates the presence of objects within an image. Facial and image recognition on advanced devices, which verify an individual's face to unlock the device, are widely hailed applications of AI. Image processing and AI can be seen as interdependent: AI models are often built upon image processing, and AI in turn helps advance image processing through open-source libraries (Venegas-Andraca & Bose, 2003). In a nutshell, decision automation, voice-to-text transcription, image processing, and content classification go hand in hand because of the exponential increase in the amount of information available online.
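The following sketch, assuming scikit-learn and its small bundled digits dataset, illustrates the image-classification half of that split; production systems would instead use deep networks trained on far larger image sets.

```python
# Illustrative image-classification sketch using scikit-learn's bundled
# 8x8 handwritten-digit images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

digits = load_digits()                              # images flattened to 64 pixel values
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = SVC(gamma=0.001)                              # classic SVM baseline for this dataset
clf.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```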
Robotics and Advanced Products
The practical advances of AI and ML can be seen in robots, autonomous vehicles, and drones. Over the next decade, drones and driverless vehicles are expected to transport people and goods safely and efficiently.
- Autonomous vehicles must turn noisy, unreliable sensor data into accurate estimates of their own position and of the vehicles around them. Such vehicles use an algorithm known as Simultaneous Localization and Mapping (SLAM), which builds a map of the surroundings while simultaneously tracking the vehicle's own location within that map; route-planning algorithms then use the map to find the quickest path between two points (a minimal state-estimation sketch follows this list).
- Robots use AI and advanced computer vision to predict and track human behavior. After detecting humans' behaviors, these robots plan their movements based on observations, activity patterns, and machine layouts, which helps the system operate efficiently and safely alongside human workers in warehouses. Through AI, each robot is human-aware and self-optimizing and learns over time from self-collected, fully scalable data. Autonomous drones, also known as Unmanned Aerial Vehicles (UAVs), operate through two software components, operational and navigation software, to revolutionize flying machines (Freedberg Jr, 2015). The goal of AI in autonomous drones is to unlock the full potential of flying machines by making efficient use of large datasets through data acquisition and analytics at the highest practical degree of automation. Complex ML and deep learning approaches are feasible for drones; neural networks help their computer vision detect objects with high accuracy, while built-in equipment such as GPS, sensors, cameras, navigation systems, and programmable controllers provides what an autonomous drone requires for automated flight.
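As referenced in the first item above, the sketch below illustrates only the state-estimation piece of localization, fusing noisy sensor readings with a simple one-dimensional Kalman filter; full SLAM additionally builds the map, and all noise values here are made up.

```python
# Fusing noisy sensor data into a position estimate with a 1-D Kalman filter.
# Full SLAM also builds the map; this sketch shows only the localization piece,
# with made-up motion and noise values.
import numpy as np

def kalman_1d(measurements, motion_per_step=1.0, process_var=1.0, sensor_var=4.0):
    x, p = 0.0, 1000.0                 # initial position estimate and its uncertainty
    estimates = []
    for z in measurements:
        # Predict: apply the commanded motion; uncertainty grows.
        x += motion_per_step
        p += process_var
        # Update: blend in the noisy sensor reading via the Kalman gain.
        k = p / (p + sensor_var)
        x += k * (z - x)
        p *= (1 - k)
        estimates.append(x)
    return estimates

rng = np.random.default_rng(0)
true_positions = np.arange(1, 21, 1.0)                  # vehicle moving 1 unit per step
noisy_readings = true_positions + rng.normal(0, 2, 20)  # unreliable sensor data
print(np.round(kalman_1d(noisy_readings), 2))
```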
Types of Learning in AI Systems
A key to understanding DoD's adoption of AI capabilities is defining the different types of learning utilized by AI. The learning models that AI systems employ in the design, development, and real-world application of today's AI systems are summarized below:
- Machine Learning: Machine learning is a sub-field of AI that relies on human involvement to learn from vast volumes of data. Machine learning is the study of computer programs that leverage statistical models and algorithms to evolve with each iteration without being explicitly programmed (Allen, 2020). It uses computer algorithms to train computers so that computers and digital devices can perform as humans do. Machine learning underpins three prominent levels of AI capability, which are as follows:
- Artificial Narrow Intelligence (ANI) is the "weak" form of AI that exists in today's technology. It is considered "weak" because it is programmed to perform only one task at a time, such as investigating data to draft a scholarly article. Narrow AI is designed to perform the specific, singular tasks it is programmed to do, although ANI capabilities can also pull in information from outside the sole task they are assigned. In expert systems, ANI involves a symbiotic relationship between humans and machines, with knowledge representations and learning methods that mimic the way humans think.
- Artificial General Intelligence (AGI) describes "strong" AI capabilities that would emulate the intelligence of the human mind and mimic human behavior to solve varied problems effectively and efficiently. Such a system would be able to process information, learn as humans do, and apply what it has learned to act in context. AGI remains hypothetical; advanced chatbots built on multiple layers of near-simultaneous decision-making are sometimes cited as steps in that direction. In expert systems, AGI refers to machines that would realize true human-like intelligence, expected to solve problems, be imaginative, make judgments, reason, and be artistic.
- Artificial Super Intelligence (ASI) would be capable of surpassing human intelligence. Many technology researchers believe that an advanced form of super-intelligent AI could lead to a global catastrophe for humanity precisely because it would surpass the human mind.
- Deep Learning: Deep learning is also a field of AI, as a sub-field of machine learning; here "deep" refers to the layers of a neural network. Deep learning eliminates much of the human intervention by automating the feature-extraction piece of the process, which enables the handling of larger datasets. Deep machine learning, sometimes described as scalable machine learning, can ingest unstructured data in its raw form and, when labeled datasets are available, uses supervised learning to determine a hierarchy of features without human intervention. Supervised learning used to inform a deep learning algorithm in this way allows users to scale machine learning in more powerful ways (Allen, 2020).
- Neural Networks, also known as Artificial Neural Networks (ANNs), mimic the human brain through a set of algorithms. They are systems of artificial nodes arranged in a feed-forward fashion loosely modeled on biological brains: in simple words, signals flow in one direction, from input to output. The system can also backpropagate, adjusting the algorithm in the opposite direction, from output back to input, by calculating and attributing the error associated with each node (Allen, 2020). A minimal numeric sketch of this forward/backward flow appears after this list.
- Natural Language Processing: Natural language processing (NLP) enables devices to process human language using technologies such as deep learning models, machine learning, and computational and statistical linguistics. Applications of NLP include chatbots, text prediction, speech recognition, and voice assistants. Chatbots use pre-programmed answering systems to respond appropriately to users' requests, following specific patterns and rules to answer the questions a user may have. Prediction software is one of the largest uses of NLP, as when a device automatically suggests the next words being typed. Understanding and recognizing natural human language is where deep learning techniques come into the picture, opening new fronts for interaction between machines and humans through text and speech (Reshamwala et al., 2013).
- Knowledge-based Expert Systems: A knowledge-based expert system is an information system that performs tasks which would normally require human intelligence. It stores knowledge as data and solves problems by drawing inferences from its knowledge base, tying together reasoning, knowledge representation, and learning models to build AI agents capable of approaching human-level insight.
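As noted in the Neural Networks item above, the following NumPy sketch makes the forward (input to output) and backward (error attribution) flow concrete by training a tiny network on the XOR problem; the architecture and learning rate are arbitrary illustrative choices.

```python
# A tiny feed-forward network trained by backpropagation, in plain NumPy.
# It learns XOR; the architecture and learning rate are illustrative choices.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: signals flow from input to output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the error is attributed to each node and the weights adjust.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(np.round(out.ravel(), 2))   # should approach [0, 1, 1, 0]
```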
AI Challenges for the DoD
Currently, AI-based systems are not capable of achieving theory of mind or self-awareness; instead, researchers are using reactive machines and limited memory to push the envelope of what AI and machine learning can do. As these capabilities are integrated into weapon systems, the department faces several challenges, including:
Bias
One of AI's main challenges stems from bias arising from poor privacy and ethics practices across different paradigms. The Defense Innovation Board (DIB) notes, "As with all new technologies–rigorous work is needed to ensure new tools are used responsibly and ethically" (Defense Innovation Board (DIB), 2023). This practice is imperative to maintain strategic and technological advantage. Bias in AI refers to the tendency of algorithms to produce results that are prejudiced, inaccurate, or unfair. The United States Department of Defense (DoD) has issued guidance for managing bias in DoD AI, which outlines specific steps for eliminating bias in the development, deployment, and use of AI systems. It recommends that DoD personnel identify and mitigate bias in AI systems by developing and implementing policies and procedures for assessing, detecting, and addressing it. Additionally, the guidance encourages DoD personnel to collaborate with external stakeholders to understand the potential implications of bias in AI. Finally, it outlines best practices for ensuring the fairness, accuracy, and transparency of AI systems so that all DoD personnel have access to trustworthy and reliable AI systems (Defense Innovation Board, 2023).
Privacy
Privacy by design is a methodology embedded into products that utilize AI throughout their development process. It ensures that advanced products carry embedded privacy principles that look for ways in which individuals' personal information can be protected, placing restrictions on how data is used, managed, transferred, and stored. AI and data ethics govern, for example, payroll information about an individual or organization, including names, addresses, employee codes, wages, and benefits, which can be anonymized to control who has access to personal data. In AI, privacy is at the core of what users and manufacturers do to secure data. The proliferation of virtual technologies presents ethical challenges that must be managed whenever data is handled, used, or shared. Digital privacy and ethics are about fairness, integrity, and accountability in embedded systems, implementing high standards of data protection to find the gaps to be closed and the biases to be mitigated (Felzmann et al., 2020).
Responsibility and Accountability
The highly autonomous behavior of AI-enabled systems such as robots and autonomous vehicles, for which neither the operator nor the programmer seems responsible for the harm such systems can cause, is widely thought to create gaps in responsibility and accountability. These gaps can incentivize harmful behavior because the system functions autonomously, independent of human interference in decision-making and management. The gaps arise in AI-based systems because the machine does not act on concrete human instructions: no individual's responsibility is clearly engaged, so a "gap" appears, not because someone is absent but because accountability is diffuse. Related capability gaps confront warfighters when an AI falls short of the desired goals and the strategy or performance must be rectified. Capability-gap analysis blends warfighter expertise with quantifiable statistical methods in a scientific approach, comparing objective requirements and thresholds and assessing potential solutions against warfighter-identified optimal and good-enough values.
Data Poisoning
Data poisoning, a type of adversarial attack, is a manipulation in which attackers tamper with the training datasets or algorithms of Machine Learning (ML) models to make AI-embedded models produce unwanted outcomes or fail during inference. It is considered an integrity attack because polluting a model's training data impairs its ability to produce accurate predictions and can allow inputs to evade the model's classification. The attacker's goal in tampering with AI and ML models is to generate incorrect, undesirable, or harmful outcomes and to steer the prediction behavior of the trained model. AI and ML models are at high risk from this type of attack because they operate on the principle of learning whatever they are taught. These attacks are often challenging and time-consuming to spot and can therefore cause extensive damage; because AI models depend on their data to make accurate predictions, a "poisoned" model generates incorrect ones.
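A small, synthetic experiment, assuming scikit-learn and a randomly generated dataset, illustrates how flipping a fraction of training labels (one simple form of poisoning) degrades accuracy on clean test data.

```python
# Illustrative label-flipping (data poisoning) experiment on synthetic data,
# showing how tampered training labels degrade accuracy at inference time.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def accuracy_with_poisoning(flip_fraction):
    rng = np.random.default_rng(1)
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(flip_fraction * len(y_tr)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # attacker flips training labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)                 # evaluated on clean test data

for frac in (0.0, 0.2, 0.4):
    print(f"{int(frac*100)}% labels flipped -> test accuracy {accuracy_with_poisoning(frac):.2f}")
```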
Processing Infrastructure
Artificial Intelligence covers a range of techniques and applications, including deep learning and machine learning, from analytics capable of predicting the future performance of AI models to image recognition and natural language processing. The most common infrastructure approaches for AI model processing involve servers with GPUs, specialized AI systems, hardware accelerators, and storage for AI, where huge volumes of data can be stored and fed into the systems to keep model performance up. GPUs have mature, broad software ecosystems and neural network libraries that support deep learning across a variety of tasks, including analytics. Moreover, hardware accelerators such as ASICs and Field-Programmable Gate Arrays (FPGAs), packed with logic blocks, accelerate the hardware performance of AI models by allowing the underlying chips to be configured and reconfigured.
Testing, Evaluation, Verification and Validation of AI Systems
Socio-technical approaches play an important part in enhancing the performance of AI-assisted models through testing, evaluation, verification, and validation (TEVV). To address bias in complex systems assisted by machine learning and AI, TEVV pairs mass customization of AI models with careful attention to the interaction between machines and humans. Socio-technical approaches mitigate bias in AI models and establish a culture change that enhances performance and eases the adoption of new technologies in any industry.
AI’s Impact on Software Assurance
The adoption of AI creates both the opportunity to enhance software assurance capabilities and the obligation to protect our system software from new adversarial threats. AI can be used to develop algorithms that detect potential security vulnerabilities, uncover malicious code, and identify suspicious behaviors. AI also presents new technological challenges that may alter the implementation of software assurance for a system. Organizations across the department are making advancements in the protection of DoD software and software assurance best practices through AI adoption.
AI Enhancements for Software Assurance
Vulnerability Identification and Prioritization
AI can be used to improve the accuracy of software vulnerability assessments. AI-driven solutions can analyze both source code and compiled binaries, helping to identify potential security flaws and vulnerabilities faster and more accurately than ever before.
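As a toy illustration of automated source scanning of the kind such tools could prioritize, the sketch below flags a few classic C pitfalls with regular expressions; the patterns and severity scores are invented, and real AI-driven analyzers learn far richer signals from large code corpora.

```python
# Toy sketch of automated source scanning; the patterns are a few classic C
# pitfalls and the severity scores are invented for illustration.
import re

RISKY_PATTERNS = {
    r"\bstrcpy\s*\(":  ("possible buffer overflow (strcpy)", 9),
    r"\bgets\s*\(":    ("unbounded read (gets)",             10),
    r"\bsystem\s*\(":  ("possible command injection",         7),
}

def scan(source: str):
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, (message, severity) in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((severity, lineno, message))
    return sorted(findings, reverse=True)          # highest severity first

demo_code = 'gets(buf);\nstrcpy(dst, src);\nprintf("ok");\n'
for severity, lineno, message in scan(demo_code):
    print(f"line {lineno}: {message} [severity {severity}]")
```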
Malware Detection
AI-enabled solutions can also enable the DoD to detect malicious software and develop new cyber defense capabilities. AI can be used to detect and respond to cyber threats in near real-time, helping to protect DoD networks, systems, and data from malicious actors.
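A hedged sketch of ML-based malware triage follows; the file features (entropy, size, suspicious-API count) and the tiny training set are invented purely to show the shape of the approach, not how any fielded detector works.

```python
# Hedged sketch of ML-based malware triage on simple, invented file features.
# Real detectors use far richer features and much larger labeled corpora.
from sklearn.ensemble import RandomForestClassifier

# Each row: [byte entropy, file size in KB, suspicious-API string count]
X_train = [
    [7.9, 350, 12], [7.6, 120, 9], [7.8, 800, 15],   # known-malicious samples
    [4.2, 500, 0],  [5.1, 90, 1],  [4.8, 2048, 2],   # known-benign samples
]
y_train = [1, 1, 1, 0, 0, 0]                          # 1 = malicious, 0 = benign

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
print(clf.predict_proba([[7.7, 640, 11]]))            # probabilities for [benign, malicious]
```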
Continuous Monitoring and Response
Continuous monitoring and response leverage automated controls and keep attention focused on the growth of AI and machine learning models. This offers SwA greater clarity and centralizes operations, with highly automated analytics and optimal resource allocation driving efficiency.
Software Development and Testing
AI can be used to optimize software development and testing processes. AI-driven solutions can automate code reviews and other manual processes, helping to reduce the amount of time needed to develop and test software.
Data Sharing
More access to relevant shared data is crucial for AI development: more data helps algorithms predict with higher accuracy and supports SwA implementation alongside machine learning and deep learning. Data sharing becomes more effective in turn, because AI systems gain better access to relevant datasets.
Analysis of Adversarial Techniques
Adversarial techniques cause the machine learning algorithms in AI models to make false or inaccurate predictions with unwanted outcomes. One widely used technique among machine learning and deep learning methods is data augmentation, which by itself can improve adversarial training in Artificial Intelligence and Machine Learning models (a minimal sketch follows).
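The sketch below, using scikit-learn on synthetic data, shows the augmentation idea in its simplest form, training on noise-perturbed copies of the data; true adversarial training instead crafts gradient-based perturbations such as FGSM.

```python
# Sketch of noise-based data augmentation as a simple robustness aid; true
# adversarial training crafts gradient-based perturbations, but the augmentation
# idea is the same: train on perturbed copies of the data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

rng = np.random.default_rng(0)
X_aug = np.vstack([X, X + rng.normal(0, 0.3, X.shape)])   # add perturbed copies
y_aug = np.concatenate([y, y])

plain = LogisticRegression(max_iter=1000).fit(X, y)
robust = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)

X_attack = X + rng.normal(0, 0.5, X.shape)                # noisy "attack" inputs
print("plain model on perturbed inputs:    ", round(plain.score(X_attack, y), 2))
print("augmented model on perturbed inputs:", round(robust.score(X_attack, y), 2))
```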
AI Impacts on Software Protections
Increased Rigor for Critical AI Capabilities
Oftentimes, AI initiatives fail due to unfamiliar paradigms, security breaches, trust issues, and a lack of software engineering best practices. To mitigate these problems, software assurance can help develop and apply a unique set of policies to keep software configurations safe, protected, and up to date. Software assurance, in this regard, is the confidence that the software in an AI product is free of known exploitable vulnerabilities. In AI, as elsewhere in computer science, the code quality and solid engineering needed to design, build, and deploy AI systems are difficult to achieve but essential for responsible AI. Software assurance turns an engineering effort into a practical application by conducting in-depth assessments of AI processes and software, providing insight into the maturity of the software and supporting robust legal and ethical compliance. SwA also offers improvement guidance to manage testability and changeability for continuous transparency in AI systems development (Freeman et al., 2022).
Software Assurance (SwA) strengthens AI processes and methods, empowering the software industry to deliver faster time to market and a better customer experience with the latest technologies. SwA expedites software testing and performs high-level quality checks on products based on AI systems (Freeman et al., 2022). It is a promising discipline for helping embedded AI-based systems overcome varied challenges of privacy, implementation strategy, and adaptable algorithms, specifically in machine learning. Software quality assurance detects errors and bugs promptly in real-time AI systems and makes it easier for any software to leverage AI in its processes.
AI is driving improved efficiency through evolving chatbots and machine learning. Several technological implementations, such as chatbots, biometrics, deep learning platforms, cyber defense, robotic process automation, image recognition, and decision management, support the secure use of technology and enable effective, logic-driven decisions (U.S.G.A, 2022). Chatbots interact with humans, while biometrics ensures that technology can identify and analyze attributes and features of the human body. Robotic process automation furthers security by mimicking human processes with scripts fed to a machine so it can complete its automation tasks effectively. Cyber defense acts as a firewall, providing timely support to prevent threats and, when one is found, to fight it so that it cannot affect the infrastructure of AI-based systems. Image recognition is another security measure that recognizes and distinguishes traits in a digital image, increasing engagement and improving security and performance.
Automated Security Best Practices for AI
When cognitive automation security practices are followed, developers and manufacturers of AI-based products have far less reason to fear security or privacy leaks. Automated security is a core component of digital transformation initiatives throughout the world, improving the compliance, quality, accuracy, and productivity of the datasets used to empower machines that imitate human intelligence. Best practices for AI automated security are operational processes that are secure and effective. Traditional software attack vectors remain critical to address, but additional security problems must be mitigated to avoid fighting an uphill battle against AI adversaries (Brundage et al., 2018). First, the operations foundation and secure development must incorporate discretion, authentication, and resilience when protecting AI and the datasets under AI control. AI must have built-in forensic capabilities in all systems to recognize bias in AI's interaction with humans, helping identify and resolve the root causes of bias in machine learning and deep learning. Moreover, AI must be capable of discerning and recognizing maliciously introduced datasets to safeguard sensitive information and resolve complex problems in machine learning (Barreno et al., 2010).
Software Assurance (SwA) organizes and incorporates multiple artifact sets for making, establishing, and developing the software in AI products, fostering development processes that are manageable, secure, and intelligent. The engineering side of SwA uses different tools to gauge the evolving quality of these artifact sets, which include the design, implementation, development, and management sets. First, design models are engineered with visual tools at different levels of abstraction, mostly covering software architecture descriptions, design models, and test models; design-set tools include Unified Modeling Language (UML) notations that visualize the way a system is designed. Second, test management tools for software assurance, including code analyzers, debuggers, and compilers that handle executable source code, support standalone testing of AI components such as forms and interfaces. Third, development tools for SwA include tools for test automation and coverage; these development-set artifacts contain installation scripts and ML notations so the AI product can be used in the environment where it is intended to run. Lastly, the management set uses artifacts such as the software development plan, work breakdown structure, business case, and description of the environment where the AI product is expected to be deployed.
Bias Mitigation
Systemic bias, which is difficult to detect at the AI data collection stage, can more readily be mitigated by maintaining a good understanding of the hardware and code used in producing the AI system at hand. Moreover, responsible licensing practices can be adopted to prevent high-risk AI-based models from being leveraged for harmful uses of AI such as mass surveillance. Open-source AI software can adopt voluntary licensing frameworks through which individual model developers limit the use of potentially irresponsible and harmful AI systems.
Infrastructure Protections
As software architectures continue to modernize to utilize the capabilities such as AI, the DoD must ensure the potential risk associated with additional attack surfaces is appropriately mitigated.
Automated Software Analysis through AI
AI research is demonstrating the benefits of using AI to analyze software for functional and complexity defects. To maintain a level of assurance for software development at scale, AI software analysis capabilities must also cover the common software defects that contribute to vulnerabilities.
Data Poisoning through Software
Decision Automation
In augmenting decision automation with human intellect, industry leaders describe an integrated way of working in which a person's capability to derive effective, relevant solutions to problems is enhanced through high-powered electronic aids and sophisticated, streamlined methods. The idea of augmenting the path to a decision is therefore not one of machines replacing humans in almost every field of life, but of machines working alongside humans.
Existing DoD Efforts
- DEVCOM C5ISR – AI support for vulnerability analysis
- Sandia NL – Bifrost / Hardware
SwA Recommendations for the Future of AI
AI advancements have shaped the future of humanity across nearly every field and industry around the globe. The pace of evolution is faster than ever because robotics and big data allow machines to mimic human intelligence across a growing range of cognitive tasks. The advancements and innovations due to AI are diversified across almost all industries, from healthcare to education, the military, finance, and cybersecurity. Systems assisted by AI, machine learning, and deep learning can greatly improve the quality of life, acting as technological innovators for the foreseeable future. However, AI-assisted systems can be a curse for humanity if they fall into the wrong hands, where technological advancements can be used to accomplish nefarious designs.
The major problem confronting software assurance professionals in the world of Artificial Intelligence is how to stay ahead of the vulnerabilities that can disrupt AI-based models. To cope with this ever-present threat, agencies throughout the U.S. Army and professionals in the U.S. Army Communications-Electronics Command's Software Engineering Center developed a Department of the Army pamphlet (DA PAM) on software assurance. The SwA DA PAM, produced through an Integrated Product Team, will provide clear guidance to the Army on implementing software assurance and complying with DoD directives, helping ensure that SwA implementation is validated and achievable. This effort will align existing agencies and Army readiness while addressing vulnerabilities early in the development of AI models, supplementing source code scanning, and integrating continuous feedback mechanisms regarding SwA DA PAM implementation (New DA PAM Guides Software Assurance Efforts, n.d.).
Global development depends on the quality of technological advancements that can boost the economic, financial, and security dimensions of global systems. The future of AI is poised to transform the world through automation and robots surpassing individual human abilities, to the point where some industries may no longer need skilled laborers. AI has taken center stage since the emergence of model- and algorithm-based machine learning, which can harness learned intelligence and massive amounts of data to make optimal discoveries in technology. Thus, AI will remain in the spotlight for the foreseeable future.
References
Allen, G. (2020). Understanding AI technology. Joint Artificial Intelligence Center (JAIC) The Pentagon United States.
Barreno, M., Nelson, B., Joseph, A. D., & Tygar, J. D. (2010). The security of machine learning. Machine Learning, 81(2), 121–148.
Brachman, R. J. (2005). Getting back to "the very idea." AI Magazine, 26(4), 48–48.
Choudhury, P., Starr, E., & Agarwal, R. (2020). Machine learning and human capital complementarities: Experimental evidence on bias mitigation. Strategic Management Journal, 41(8), 1381–1411.
Felzmann, H., Fosch-Villaronga, E., Lutz, C., & Tamò-Larrieux, A. (2020). Towards transparency by design for artificial intelligence. Science and Engineering Ethics, 26(6), 3333–3361. https://doi.org/10.1007/s11948-020-00276-4
Freedberg Jr, S. J. (2015). Centaur Army: Bob Work, Robotics, and the Third Offset Strategy. Breaking Defense, 9.
Freeman, L., Batarseh, F. A., Kuhn, D. R., Raunak, M. S., & Kacker, R. N. (2022). The path to a consensus on artificial intelligence assurance. Computer, 55(3), 82–86.
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Magazine, 27(4), Article 4. https://doi.org/10.1609/aimag.v27i4.1904
Mattis, J. (2018). Summary of the 2018 national defense strategy of the United States of America. Department of Defense Washington United States.
McCarthy, J. (2004). What is artificial intelligence? Retrieved from http://www-formal.stanford.edu/jmc/whatisai.html
New DA PAM guides software assurance efforts. (n.d.). www.army.mil. Retrieved April 4, 2023, from https://www.army.mil/article/251544/new_da_pam_guides_software_assurance_efforts
Newell, A., & Simon, H. A. (2007). Computer science as empirical inquiry: Symbols and search. In ACM Turing award lectures (p. 1975).
Oxford Languages | The Home of Language Data. (n.d.). Retrieved December 23, 2022, from https://languages.oup.com/
Reshamwala, A., Mishra, D., & Pawar, P. (2013). Review on natural language processing. IRACST – Engineering Science and Technology: An International Journal (ESTIJ), 3, 113–116.
Simon, H. A. (1983). Why should machines learn? In Machine learning (pp. 25–37). Elsevier.
Venegas-Andraca, S., & Bose, S. (2003). Quantum Computation and Image Processing: New Trends in Artificial Intelligence.
U.S.G.A. (2022). Artificial intelligence: DOD should improve strategies, inventory process, and collaboration guidance. Retrieved December 28, 2022, from https://www.gao.gov/products/gao-22-105834
Thornberry, W. M. M. (2021). National Defense Authorization Act for Fiscal Year 2021. Public Law 116-283.