Resilient Systems

AI obstacle.

Two issues impede advancing AI usage: the lack of extensive knowledge base capabilities and a limited ability to learn. The complexity lies in the requirement that an AI system retain a vast spectrum of knowledge and the traits that allow it to write programs for specific conditions it cannot otherwise answer. AI experts note that problems with artificial intelligence and the programming technology involved are so complex that algorithms and standard programming methods are insufficient to solve them (Sammet, 1971). Although this argument bears truth, it does not account for the advances in AI. Other opponents of AI have noted problems pertaining to memory capacity and organization. Because advance knowledge of information storage requirements and memory organization is unavailable, programs need flexibility (Simon & Newell, 1964). These perceptions introduced a state of stagnation in AI. AI advancement gained ground early on but has since received less concentration and research because of the impact of these observations and the belief that such systems are inherently restricted. Expert systems are unreliable when confronting problems outside their respective areas and may therefore provide incorrect answers in those situations (Nilsson, p. 407). Because molecular biology and neuroscience still lack comprehension of the physical mechanisms responsible for human cognitive function, AI may remain restricted until those fields yield further revelations (Moravec, 2009). Some experts hold that a successful AI system will pass the Turing test, while others argue that behavioral testing proves no cognitive skill (Vincent, 2014).

Early on, Information Processing Languages (IPLs) and algebraic languages received criticism over these very same conditions (Simon & Newell, 1964). The ability to construct ICSs capable of handling the adversity presented by their connection to the Internet depends upon a constant dedication to AI programs. Future work on AI should focus on the singular purpose of a required task, similar to the methods of today's software designs. Adopting heuristic concepts in programs can offer future acuity for AI advancement. Chess programs match human ability, excel at checkmating combinations, and exhibit the human problem-solving technique of "means-end analysis" required for theorizing about and forming computational processes (Simon & Newell, 1964); a sketch of that heuristic appears below. Research and investigation of heuristic concepts is what allowed AI to reach its current status. Effort must be made to teach programs to learn from incidents and to write code for themselves. AI must bridge this gap to benefit ICSs. The precept that AI took root as an upheaval against the limitations of established fields has caused regression in its advance (Russell & Norvig, 2010). Such regression isolates information security from the defensive constructs required by ICSs. As noted by David McAllester, automated reasoning became isolated from proper procedures and fixed analysis (Russell & Norvig, 2010). Overcoming the regressions and limitations found within AI is conceivable with resilient control applications. Such an approach does not leave ICS defense to reactive response but instead provides proactive measures (Rieger, Gertman, & McQueen, 2009).
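
As an illustration of the means-end heuristic Simon and Newell describe, the following minimal sketch repeatedly applies whichever operator removes an outstanding difference between the current state and the goal state. The states, operators, and goal are hypothetical examples chosen for this illustration, not drawn from the cited sources.

```python
# Means-end analysis: reduce the difference between the current state
# and the goal state one operator at a time. States and operators here
# are hypothetical control-system actions.

GOAL = {"valve_closed": True, "pump_off": True, "alarm_cleared": True}

# Each operator names the single goal condition it can satisfy.
OPERATORS = {
    "close_valve": "valve_closed",
    "stop_pump":   "pump_off",
    "reset_alarm": "alarm_cleared",
}

def differences(state, goal):
    """Return the goal conditions the current state does not yet satisfy."""
    return [k for k, v in goal.items() if state.get(k) != v]

def means_end_analysis(state, goal):
    plan = []
    while (diffs := differences(state, goal)):
        # Select an operator that addresses an outstanding difference.
        op = next(name for name, fixes in OPERATORS.items() if fixes in diffs)
        state = {**state, OPERATORS[op]: True}   # apply the operator
        plan.append(op)
    return plan

print(means_end_analysis({"valve_closed": False, "pump_off": False,
                          "alarm_cleared": False}, GOAL))
# -> ['close_valve', 'stop_pump', 'reset_alarm']
```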

Resilient control systems.

Resolving the current enigma within Critical Infrastructure (CI) depends on resilient designs. Current systems depend on operator reprogramming and/or repair after the fact. Designing systems that consider all threats and measures can alleviate the problems confronted in CI (Rieger et al., 2009). The dated definition of resilience fails to consider the current state of security for CI. The ideology of organizational and information technology in association with resilient systems is problematic: a terminology that suggests systems can tolerate fluctuations to their structure and parameters fails to account for malicious deeds (Rieger et al., 2009). One alternative is the idea of Known Secure Sensor Measurement (KSSM). “The main hypothesis of the KSSM concept is the idea that a small subset of sensor measurements, which are known to be secure (i.e. cannot be falsified in the physical layer), has the potential to significantly improve the observation of adversarial process manipulation due to cyber-attack” (Linda, Manic, & McQueen, 2012, p. 4).
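
The following minimal sketch illustrates the KSSM idea quoted above: a small subset of sensors is assumed physically secure, and disagreement between those trusted readings and the values the control network reports suggests adversarial manipulation. The sensor names, tolerance, and values are hypothetical.

```python
# KSSM-style check: flag sensors whose network-reported value diverges
# from the known-secure measurement. Names and threshold are assumed.

TRUSTED_SENSORS = {"flow_A", "pressure_2"}   # known secure in the physical layer
TOLERANCE = 0.05                             # 5% disagreement threshold (assumed)

def check_kssm(reported: dict, trusted: dict) -> list:
    """Return sensors whose reported value diverges from the secure
    measurement by more than the tolerance."""
    suspect = []
    for name in TRUSTED_SENSORS & reported.keys() & trusted.keys():
        secure_value = trusted[name]
        if abs(reported[name] - secure_value) > TOLERANCE * abs(secure_value):
            suspect.append(name)
    return suspect

reported = {"flow_A": 12.1, "pressure_2": 88.0, "temp_7": 41.5}
trusted  = {"flow_A": 12.0, "pressure_2": 97.3}
print(check_kssm(reported, trusted))   # -> ['pressure_2']
```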

Resilient systems should be able to determine uncertainties, sense inaccuracies under all conditions, take preventive action, recover from failures, and mitigate incidents beyond design constraints (Yang & Syndor, n.d.). Valid resilience considers representations of proper operation within process applications under varying conditions, inclusive of malicious actors, and includes state awareness within the resilient design (Rieger et al., 2009). System resilience in current CI architectures depends on human reaction and analogy. While human capability delivers sound heuristics and analogy, situations connected to fatigue, stress, or other human deficiencies can affect decision-making quality (Rieger et al., 2009). Further complexities arise from the use of digital technology. The breadth of information presented for operator response, and the automated versus human-manipulated inputs or combinations thereof, create complex interactions that lead to a lack of clarity in dependencies and rules (Rieger et al., 2009). True resilience requires a system to function with a comprehension of these variables. A resilient control system will be error tolerant and will complement the system with perception, fusion, and decision-making (Rieger et al., 2009). Prediction of failure has been successful where systems employ fault detection, diagnostics, and prognostics (Yang & Syndor, n.d.), as the sketch below illustrates.
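
A minimal sketch of the fault detection and prognostics pattern just described: compare each measurement against a model prediction (the residual) to detect faults, and extrapolate a degradation trend to estimate remaining life. The model, threshold, and data are hypothetical illustrations, not taken from the cited sources.

```python
# Residual-based fault detection plus simple trend prognostics.

def detect_fault(measured, predicted, threshold=2.0):
    """Fault detection: flag when the residual exceeds the threshold."""
    residual = measured - predicted
    return abs(residual) > threshold, residual

def remaining_cycles(health_history, failure_level=0.2):
    """Prognostics: linearly extrapolate a declining health index to
    estimate how many cycles remain before it crosses the failure level."""
    n = len(health_history)
    slope = (health_history[-1] - health_history[0]) / (n - 1)
    if slope >= 0:
        return None  # no degradation trend observed
    return int((failure_level - health_history[-1]) / slope)

faulty, r = detect_fault(measured=107.5, predicted=100.0)
print(faulty, r)                                  # -> True 7.5
print(remaining_cycles([1.0, 0.9, 0.8, 0.7]))     # -> 5
```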

Cyber awareness.

Awareness in the cyber domain was intended to be governed through risk assessment. Only forensic evaluation after the fact truly indicates the actual cause of an abnormal event (Rieger et al., 2009). Predictably determining critical digital assets is difficult to impossible because of hidden dependencies (Langner & Pederson, 2013). The intellectual aptitude of malicious actors improves through the use of stochastic methods whereby variability of motive and objective exists (Rieger et al., 2009). Put simply, risk management allows no technical review of potential risk and is really a business tool (Langner & Pederson, 2013). A major misconception holds that risk mitigation will allow defensive metrics to be implemented in ample time; cross-referenced with CI, the idea is flawed. Rapid reconfiguration in these environments is not a possibility; due to their design, timely mitigation is near impossible (Langner & Pederson, 2013). The three principles provided in Table 2 should be the basis for policy and the way forward. Though routine and common pattern analysis may provide anomaly comparisons, it is minimally effective at predicting an adversary's behavior (Rieger et al., 2009), as the sketch below illustrates.
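
The following minimal sketch shows the routine-pattern comparison mentioned above: new activity is scored against a learned baseline and large deviations are flagged. As the text notes, this catches departures from routine only; an adversary who mimics normal patterns evades it. The data and threshold are hypothetical.

```python
# Baseline-deviation anomaly scoring over a routine activity pattern.
from statistics import mean, stdev

baseline = [42, 45, 44, 43, 46, 44, 45]   # e.g., routine packets per second

def anomaly_score(observation, history):
    """Standard deviations between an observation and the routine mean."""
    return abs(observation - mean(history)) / stdev(history)

for obs in (44, 71):
    flagged = anomaly_score(obs, baseline) > 3.0   # 3-sigma rule (assumed)
    print(obs, "anomalous" if flagged else "routine")
# -> 44 routine
# -> 71 anomalous
```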

Table 2
Basis for Policy

Viewing CI as a political issue has precedent. Fixing design vulnerabilities should be paramount and should not involve hypothetical solutions or assessments. Though CI is vital, the security of the cyber domain should be viewed unilaterally. A resilient control system must have the capability to counteract malicious attack because such systems are digitally based and thereby more vulnerable (Rieger et al., 2009). Business logic values risk-taking over resource spending, and since critical asset owners often find a rationale for doing nothing when surveying systems with risk management, they are quite happy with the expenditure required: nothing (Langner & Pederson, 2013). Such practice negates improvement for CI and the ability to build resilient systems.

Data fusion.

Surveying the scope of data integration reveals the obstacles that restrict AI in CI environments. The consumption of data determines the generation of information and the appropriate judgment of that information (Rieger et al., 2009). Implementing data fusion through a centralized application can accomplish reasoning over heterogeneous information and allow adequate countermeasures to be triggered (Flammini, Gaglione, Mazzocca, Moscato, & Pragliola, 2008). Insight into the definition of data fusion aids comprehension of its inclusion: synthesizing raw data from multiple sources allows generation of better information (BusinessDictionary.com, 2014). Experiments on simulated SCADA systems demonstrate that autonomous sensory agents can report to a central processor that fuses evidence from the physical and virtual dimensions into a unified view of the system (Genge, Siaterlis, & Karopoulos, 2013); a sketch of such fusion follows this paragraph. Data fusion can help in the areas where AI has recently been found restrictive, and its attributes provide the related potential. AIIC will understand how to sift through data and reduce nonessential information. The principle of identification gives AIIC validation ability. The knowledge base will continue to progress over time through the improved characterization and knowledge principle. Data fusion provides attributes whereby AI can advance CI security, as seen in Table 3.
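
A minimal sketch of centralized data fusion in the spirit of the cited experiments: independent agents report evidence from the physical and cyber dimensions, and a central fuser combines them into one system-level view before triggering a countermeasure. The agent names, weights, and trigger level are hypothetical.

```python
# Centralized fusion of heterogeneous sensor evidence (all values assumed).

# Evidence in [0, 1] from heterogeneous agents (0 = normal, 1 = attack).
reports = {
    "physical:flow_deviation": 0.7,   # process value drifting from setpoint
    "cyber:ids_alert_rate":    0.9,   # network intrusion detection agent
    "cyber:protocol_anomaly":  0.4,   # malformed control traffic
}

# Weighted average as the fusion rule; weights encode agent reliability.
weights = {"physical:flow_deviation": 0.5,
           "cyber:ids_alert_rate":    0.3,
           "cyber:protocol_anomaly":  0.2}

def fuse(reports, weights):
    """Combine heterogeneous evidence into a single system-level score."""
    return sum(reports[k] * weights[k] for k in reports)

score = fuse(reports, weights)
print(round(score, 2), "-> trigger countermeasures" if score > 0.6 else "-> monitor")
# 0.7*0.5 + 0.9*0.3 + 0.4*0.2 = 0.70 -> trigger countermeasures
```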

Table 3
Principles of Critical Infrastructure Security

Data fusion can improve alerting and proactive measures for CI systems. Its use in ICSs can achieve robust structures that combine fusion and analysis; security and privacy; and collaboration and information sharing (Informatica, 2013). Such technology is beginning to provide large quantities of data: interconnected industry delivers vast information pools from perimeter and network security systems (Informatica, 2013). Achieving robust systems starts with enabling a network to sense attacks and indicate them with warning indices through supported data fusion, accomplished through the management of intrusions, misuse, anomaly detection, diagnostics, and pattern analysis (Chairman of the Joint Chiefs of Staff, 2012); a sketch of such a warning index follows.
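
The following minimal sketch illustrates the warning-index idea: each of the detection functions the doctrine lists contributes to a graded warning level. The detector set and band boundaries are hypothetical, not taken from the cited publication.

```python
# Graded warning index from multiple detection functions (bands assumed).

DETECTORS = ("intrusion", "misuse", "anomaly", "diagnostics", "patterns")

def warning_index(hits: set) -> str:
    """Map the fraction of detectors reporting trouble to a warning band."""
    fraction = len(hits & set(DETECTORS)) / len(DETECTORS)
    if fraction >= 0.6:
        return "RED"      # coordinated indications: sense-of-attack warning
    if fraction >= 0.2:
        return "YELLOW"   # isolated indications: heighten monitoring
    return "GREEN"

print(warning_index({"intrusion", "anomaly", "patterns"}))  # -> RED
print(warning_index({"diagnostics"}))                       # -> YELLOW
```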

Simplifying intelligible design.

Humanity lacks assurance of its creation, and speculation between theology and evolution governs the debate over intelligible design (Grinnell, 2009). Without dwelling on that debate, the intent here is to encourage simple intelligible designs. Success with complex intelligible AI designs becomes exhausting and taxes human innovation because of the difficulty of mimicking the neurophysics of the human brain (Salamon, Rayhawk, & Kramar, 2010). Such complication indicates the direction in which AI is more productive. It remains speculative whether enabling computers to learn is more effective than traditional programming techniques (Keller, 2013). There is belief, however, that an artificial general intelligence (AGI) system, given the correct rudimentary abilities, could learn perceptual patterns through different sensory perceptions among diverse environmental contexts (Voss, 2002). Debate revisits the premise that AI cannot advance without a connected neural network, sensory perception ability, and an encoded knowledge base (Hayes, 2012).

Adaptive and flexible attributes within AI entities have achieved conceptual goals, though efficiency across the broad spectrum of human knowledge and self-awareness remains elusive (Voss, 2002). Since ICS security is a specific goal, AI should prove beneficial there. A lack of further development may find explanation in the intent to achieve simulated consciousness and its detachment from the cognitive continuum (Gelernter, 2007). AI advancement may be accomplished through singular-purpose concepts, such as the success of Vicarious's recursive cortical network in passing the CAPTCHA test (Johnson, 2013); Vicarious attributes this achievement to its decision to skip past human brain emulation. Broad-spectrum knowledge depiction requires an ontology to link the variable domains of knowledge (Russell & Norvig, 2010). Ideas built around flexible modular frameworks tailored to specific networks could achieve target goals and help grow the knowledge base that AI advancement needs (Sowa, 2002). Fuzzy logic, an AI subset designed to capture expert knowledge and perform decision-making, is suitable for nonlinear systems such as ICSs (Dingle, 2011); a sketch appears after this paragraph. While science's current attempts are not effective in creating conscious intelligence, this should not restrict innovation and continual advancement in unconscious AI programs (Gelernter, 2007). For further clarity and comprehension, the concept of unconscious AI refers to a system that is not self-aware.
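
A minimal fuzzy-logic sketch in the sense described above: expert knowledge captured as overlapping membership functions and an if-then rule, suited to nonlinear plants where crisp thresholds behave poorly. The membership breakpoints and rule are hypothetical, not drawn from Dingle (2011).

```python
# Fuzzy rule evaluation: membership functions, min for AND, scaled output.

def high_temperature(t):
    """Membership in 'temperature is high', rising linearly 60..80 deg C."""
    return min(1.0, max(0.0, (t - 60.0) / 20.0))

def high_load(p):
    """Membership in 'load is high', rising linearly 50..90 percent."""
    return min(1.0, max(0.0, (p - 50.0) / 40.0))

def cooling_command(t, p):
    """Rule: IF temperature is high AND load is high THEN increase cooling.
    AND is taken as min; the rule's firing strength scales a 0..100% command."""
    firing = min(high_temperature(t), high_load(p))
    return 100.0 * firing

print(cooling_command(t=75.0, p=80.0))   # -> 75.0 (both memberships 0.75)
```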