Internet of Things (IoT) Research Paper
Rapid technological progress and real improvements in Internet protocols and computing systems have made communication between different devices easier than ever before. According to various estimates, around 50 billion devices were expected to be connected to the Internet by 2020. This has given rise to the recently developed concept of the Internet of Things (IoT). IoT is a combination of embedded technologies spanning wired and wireless communications, sensor and actuator devices, and the physical objects connected to the Internet. One of the long-standing goals of computing is to simplify and enrich human activities and experiences. To achieve this intelligently, IoT needs data, either to deliver better services to users or to improve the performance of the IoT infrastructure itself. Systems should therefore be able to gather raw data from different resources over the network and analyze this information to extract knowledge.
Since IoT will be among the richest sources of new data, data science will make a great contribution to making IoT applications more intelligent. Data science is a blend of several scientific fields that uses data mining, machine learning, and other techniques to find patterns and new insights in data. These techniques cover a broad range of algorithms applicable to different domains. Applying data analytics to a particular area involves characterizing the data (for example, its volume, variety, and velocity), choosing suitable data models (for example, neural networks, classification, or clustering methods), and applying efficient algorithms that match the data characteristics.
Since data is the basis of extracted and abstracted knowledge, it is essential to have high-quality data; data quality directly affects the accuracy of knowledge extraction. Because IoT produces high-volume, high-velocity, and highly varied data, preserving data quality is a hard and challenging task. Although many solutions have been and are being introduced to address these issues, none of them can handle all aspects of the data characteristics in a precise way, because of the distributed nature of Big Data management solutions and real-time processing platforms. The abstraction level of IoT data is low; that is, the data that originates from different resources in IoT is mostly raw and not adequate for analysis on its own. A wide variety of solutions has been proposed, but most of them require further improvement. For example, semantic technologies aim to raise the abstraction level of IoT data through annotation algorithms, but they need more work to cope with its velocity and volume.
Given these characteristics of smart data, analytic algorithms should be able to handle Big Data; that is, IoT needs algorithms that can analyze data arriving from a variety of sources in real time. Many efforts have been made to address this issue. For instance, deep learning algorithms and modern variants of neural networks can reach a high accuracy rate if they have enough data and time. However, deep learning algorithms can be strongly affected by noisy data, and neural-network-based models lack interpretability; that is, data scientists cannot easily understand the reasons behind a model's outcomes. In a similar way, semi-supervised algorithms, which combine a small amount of labeled data with a large amount of unlabeled data, can also help IoT data analytics.
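The semi-supervised idea mentioned above can be illustrated with a minimal self-training loop in Python. Everything here is an invented toy (a 1-D threshold classifier, a `margin` confidence cutoff, and tiny data), not a production IoT analytics method: a small labeled set trains an initial model, confidently predicted unlabeled points are absorbed as pseudo-labels, and the model is refit.

```python
def train_threshold(points):
    """Fit a 1-D classifier: predict class 1 when x is at or above the
    midpoint between the two class means."""
    m0 = [x for x, y in points if y == 0]
    m1 = [x for x, y in points if y == 1]
    return (sum(m0) / len(m0) + sum(m1) / len(m1)) / 2

def self_train(labeled, unlabeled, margin=1.0):
    """Self-training: fit on the labeled set, pseudo-label the unlabeled
    points that fall far from the decision boundary, then refit."""
    labeled = list(labeled)               # avoid mutating the caller's list
    threshold = train_threshold(labeled)
    for x in unlabeled:
        if abs(x - threshold) >= margin:  # treat as a confident prediction
            labeled.append((x, int(x >= threshold)))
    return train_threshold(labeled)
```

A real system would use a stronger base model, but the structure (train, pseudo-label, retrain) is the essence of self-training.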
Decision Support System (DSS)
We are investigating and analyzing Decision Support Systems (DSSs) that are particularly used by business organizations demanding Business Intelligence (BI) and analytical capabilities within their DSS. The reason for choosing this category of DSS is rapidly increasing globalization, which has expanded the horizons of organizations and made the decision-making process more difficult and complicated. For this, an efficient and effective DSS is needed that is not only capable of supporting the decision-making process but also helps with parameter analysis and optimization. Furthermore, the DSS must also offer certain value-added services that make the output easier to understand.
Several vendors produce DSS products for businesses and organizations. However, most DSSs are custom-tailored, built by well-known software houses at the request of organizations. Some of the commonly known vendors are Actuate, BEA Systems, Datanautics, IBM, Hyperion, Microsoft, and MySQL AB.
Narrowing down the search, we have chosen the Decision Support Systems Software created by Vanguard Software Corporation. Vanguard is very confident about the capabilities of its product and states that “Vanguard’s decision support system software makes it possible for you to apply decision analysis techniques throughout your organization to problems ranging from simple projects to enterprise-wide strategic plans” (“VanguardSw”). The software is particularly used for business forecasting and planning, with extensive analytical and optimization capabilities that make effective use of Business Intelligence. It also provides time-series forecasting, Monte Carlo simulation, optimization of decision choices, business analytics, and forecasting components. We believe the combination of these features and functions makes it very effective and efficient business software.
The Vanguard DSS is not only simple and effective but also very convenient for businesses that believe in collaborative and integrated decision-making procedures. The software performs classical decision tree analysis and Markov simulations (“VanguardSw”) and presents the results in graphical format; results can also be produced through Monte Carlo simulation or exported to Excel. The macros of a model can also be programmed using simple programming languages such as C or Pascal (“VanguardSw”). These models ensure accuracy, efficiency, and speedy output. Furthermore, the software has integrated Artificial Intelligence components in the form of Expert System (ES) technology that helps in the optimization and automation of routine and general decisions (“VanguardSw”).
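Vanguard's actual engine is proprietary, but the two techniques named above, decision-tree rollback and Monte Carlo simulation, can be sketched in a few lines of Python. The `launch` scenario and its probabilities are invented purely for illustration: the expected value of a chance node is computed analytically and then re-estimated by sampling.

```python
import random

def expected_value(branches):
    """Roll back a decision-tree chance node: sum of probability * payoff."""
    return sum(p * payoff for p, payoff in branches)

def monte_carlo(branches, trials=100_000, seed=42):
    """Estimate the same expectation by repeated random sampling."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        r, cum = rng.random(), 0.0
        for p, payoff in branches:
            cum += p
            if r < cum:
                total += payoff
                break
    return total / trials

# Invented scenario: a product launch with a 60% chance of a 120,000 gain
# and a 40% chance of a 50,000 loss; the expected value is about 52,000.
launch = [(0.6, 120_000), (0.4, -50_000)]
```

For a single chance node the analytic rollback is exact and the simulation is redundant; Monte Carlo earns its keep when the tree has many correlated uncertain inputs.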
Overall, the chosen DSS is very efficient and was selected because of the following features and benefits:
- It allows and supports collaboration by allowing several individuals to contribute their knowledge and then combining that knowledge base (KB) to formulate decision models and simulations;
- Interpretation of data: the software is capable of predicting and forecasting on the basis of in-depth interpretation of the given data; it does not merely read data superficially but interprets it extensively.
- Web-based support is also offered by the systems that allow the creation of interactive web reports and shareable models.
- Integration of the system is easy, with the least compatibility issues for any system or server. It can also be supported over networking.
As already stated, the market demand for such analytical software that is highly efficient in Business Intelligence is quite high due to the rapid globalization of the economy and businesses (Rodriguez, Daniel, Casati & Cappiello, 2010).
Development of Expert System
An Expert System (ES) is an intelligent computer-based decision tool that uses both facts and heuristics to solve difficult decision problems based on knowledge acquired from an expert. By definition, an ES is a program that replicates the thought process of a human expert to solve complex decision problems in a specific domain. The development of ESs is expected to continue for a long time, and as it continues, many new and exciting applications will emerge. An ES works as an interactive system that responds to questions, asks for clarification, makes recommendations, and generally aids the decision-making process. ESs give expert advice and guidance in a wide variety of activities, from computer diagnosis to delicate medical surgery.
Knowledge base (KB)
A KB is the core of the ES structure. The KB might be a specific diagnostic KB compiled by a consulting firm, with the end user supplying the problem data. A KB is not a database: a traditional database environment deals with data that has a static relationship between the elements in the problem domain. A KB is created by knowledge engineers, who translate the knowledge of real human experts into rules and strategies. These rules and strategies can change depending on the prevailing problem scenario. The KB gives the ES the capability to recommend directions for user inquiry. It is typically stored in the form of if-then rules.
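The if-then rule representation mentioned above can be illustrated with a minimal forward-chaining inference loop in Python. The toy diagnostic rules are invented for illustration; a real ES shell would add conflict resolution, uncertainty handling, and explanation facilities on top of this core idea.

```python
def forward_chain(facts, rules):
    """Fire if-then rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # A rule fires when all of its conditions are known facts.
            if conclusion not in facts and set(conditions) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# A toy diagnostic KB: each rule is ([if-conditions], then-conclusion).
diagnostic_rules = [
    (["fever", "cough"], "flu_suspected"),
    (["flu_suspected", "short_breath"], "refer_to_doctor"),
]
```

Starting from the facts `{"fever", "cough", "short_breath"}`, the loop derives `flu_suspected` and then `refer_to_doctor`, mirroring how chained if-then rules turn raw observations into recommendations.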
The KB of an ES contains both factual and heuristic knowledge. Factual knowledge is the knowledge of the task domain that is widely shared, typically found in textbooks or journals, and commonly agreed upon by those knowledgeable in the particular field.
Heuristic knowledge is the less rigorous, more experiential, more judgmental knowledge of performance. In contrast to factual knowledge, heuristic knowledge is rarely discussed and is largely individualistic. It is the knowledge of good practice, good judgment, and plausible reasoning in the field; it underlies the “art of good guessing.”
Knowledge Engineering
Knowledge engineering is the art of designing and building ESs, and knowledge engineers are its practitioners. As stated before, knowledge engineering is an applied branch of the study of artificial intelligence, which, in turn, is a branch of computer science. Today there are two ways to build an ES: it can be built from scratch, or built using a piece of development software known as a “tool” or a “shell.” Before discussing these tools, let us briefly examine what knowledge engineers do. Although different styles and methods of knowledge engineering exist, the basic approach is the same: a knowledge engineer interviews and observes a human expert or a group of experts and learns what the experts know and how they reason with their knowledge. The engineer then translates the knowledge into a computer-usable language and designs an inference engine, a reasoning structure, that uses the knowledge appropriately. The engineer also decides how to incorporate the use of uncertain knowledge in the reasoning process and what kinds of explanation would be useful to the end user.
Next, the inference engine and the facilities for representing knowledge and for explanation are programmed, and the domain knowledge is entered into the program piece by piece. It may be that the inference engine is not quite right, that the form of knowledge representation is awkward for the kind of knowledge required for the task, or that the expert decides some pieces of knowledge are wrong. All of these issues are found and corrected as the ES gradually gains competence.
ESs are typically written in special programming languages. The use of languages like LISP and PROLOG in the development of an ES simplifies the coding process. The major advantage of these languages, compared with conventional programming languages, is the ease of adding, eliminating, or substituting rules, and their memory management capabilities. The programming languages used for ESs tend to work in a way similar to an ordinary conversation: we usually state the premise of a problem as a question, with actions being stated much as when we verbally answer the question, that is, in a “natural language” format. If, during or after a conversation, an ES discovers that a piece of its knowledge base is incorrect or no longer applicable because the problem environment has changed, it should be able to update the knowledge base accordingly. This capability would enable the ES to converse in a natural-language format with either the developers or the users.
Artificial Neural Network (ANN)
An artificial neural network is a system based on the operation of biological neural networks; in other words, it is an emulation of a biological neural system. It is a mathematical or computational model inspired by the structural and/or functional aspects of biological neural networks. It consists of an interconnected group of artificial neurons and processes information using a connectionist approach to computation. The development of artificial neural networks has been marked by periods of extensive optimism and others of disappointment; a realistic appraisal of their capabilities dispels some of the unrealistic expectations that have grown up around a new and developing subject. The question of whether to use artificial neural networks to solve a particular problem is a matter of judgment for the designer in charge of the undertaking. A neural network is a reasonable candidate if significant advantages, such as cost, speed of operation, reliability, ease of maintenance, ease of initial development, ease of deployment, and adaptability, can be shown to exist.
As neural network applications are still in the early stages of development, many practical problems are likely to confront pioneering applications, and it will not always be possible to rely on established precedents as a guide. There will therefore be some element of risk in the choice of a neural network, should the application fail. The consequences of choosing neural networks for new applications, should they fail to give the required level of performance, would include costs such as lost time and development expenses. The underlying reason for using an artificial neural network in preference to other available methods is the expectation that it will be able to give a fast solution to a non-trivial problem. Depending on the kind of problem under consideration, there are often satisfactory alternative proven methods capable of giving a fast appraisal of the situation.
In real situations, there are many areas where expertise is required in order to achieve the desired result. Many of these areas of interest involve the assessment of relatively constant tasks, for example, the supervision of a power system's operating condition or the testing for explosives of items belonging to departing air travelers. Work of this kind, when done by people, is very tedious, and the resulting boredom leads operators toward careless complacency. In addition, the cost of keeping a sufficiently large team of trained staff on hand to continuously monitor such processes is extremely high. The neural network has the potential, in some circumstances, to provide an alternative approach: a tireless, continuous, reliable, and affordable replacement for people in routine work of this kind.
An artificial neural network may be relied on to undertake suitably delegated tasks in a systematic way at a speed that could not be achieved or maintained by human operators. Such advantages naturally involve development costs and probably some ongoing maintenance costs.
In contrast to ESs, which incorporate a knowledge base, neural networks have no such collection of information. They must be trained for a given problem or situation so that the weights come to contain the required information. An example of the iterative procedure required during training is given by the perceptron training rule. One way of classifying training techniques is into two classes, supervised and unsupervised training, and these methods are considered next.
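The perceptron training rule referred to above can be sketched as follows. This is the standard textbook example, learning logical AND, which is linearly separable; the learning rate and epoch count here are arbitrary illustrative choices.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Perceptron rule: nudge the weights toward each misclassified sample."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x1       # weights absorb the training
            w[1] += lr * err * x2       # information, as noted above
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Logical AND is linearly separable, so the perceptron converges on it.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
```

This is supervised training in miniature: each update uses a known target. An unsupervised method would instead adjust the weights from the structure of the inputs alone, with no targets supplied.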
Genetic Algorithm (GA)
A genetic algorithm is an evolutionary computational method for optimized problem solving. Darwin's theory of evolution is the basis for these algorithms, which seek the best solution among all available solutions. Evolution strategies were first described by Rechenberg in 1960, and genetic algorithms were first formalized by John Holland in his book. John Koza was the first to do Genetic Programming, in 1992.
Since their introduction, genetic algorithms have been used to tackle difficult problems such as non-deterministic problems and machine learning, and for the evolution of simple programs such as the evolution of pictures and music. The main advantage of genetic algorithms over alternative methods is their parallelism. A genetic algorithm moves through the search space with a population of individuals, so there is less chance of it getting stuck in a local extremum than with other decision-making methods available today. Quick convergence is a key capability of genetic algorithms, since they are designed to converge rapidly on a specific solution to a problem.
Genetic algorithms maintain a range of solutions, and with the help of these solutions they can adapt to unknown conditions. Genetic algorithms can be implemented on semiconductor devices, and they can also be integrated with wireless technologies. With the help of digital signal processors and FPGAs, rapid prototyping of biological models can be done using these algorithms.
A genetic algorithm is a well-known meta-heuristic inspired by natural reproduction, originally proposed by Holland in 1975. Since then, the genetic algorithm has been used by many researchers for the optimization of different combinatorial problems. Over the past couple of years, there has been much evidence to suggest that the genetic algorithm can be effectively used for many mixed-integer nonlinear problems.
There are many advantages to the genetic algorithm approach compared with other meta-heuristics. A genetic algorithm can use a set of initial solutions, feasible or infeasible, instead of a single one, i.e., parallel processing. Genetic algorithms are also able to search the solution space to locate the optimal or a near-optimal solution. Another advantage is the ability to apply a stochastic procedure rather than a deterministic one as a tool to guide the search process in several areas. A further advantage is high flexibility in handling problems with both linear and nonlinear objectives and constraints. Genetic algorithms can handle problems with continuous, discrete, and mixed variables, regardless of the space and dimension. Finally, the genetic algorithm can use the objective value directly to guide the search, with no need to use linear equivalents of nonlinear functions, adjust the results, or convert discrete variables to continuous ones or vice versa.
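The operators discussed in this section, population-based search, selection, crossover, and mutation, can be sketched in a few lines of Python. This is a minimal illustration on the OneMax problem (maximize the number of 1-bits in a chromosome); the population size, tournament selection, and mutation rate are arbitrary choices for the sketch, not prescriptions.

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=60,
                      mutation_rate=0.02, seed=1):
    """Minimal GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    # A set of random initial solutions, i.e. the population.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, length)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in child]               # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax: the fitness of a chromosome is simply the number of its 1-bits,
# so the objective value guides the search directly, as noted above.
best = genetic_algorithm(sum)
```

Note how the objective (`sum`) is used as-is to rank individuals, with no gradient or linearization, which is the last advantage listed in the paragraph above.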
Issues/ Challenges/ Solutions
Artificial Intelligence is an emerging area of Information Technology in which robots and computer-aided supports are being made so efficient and intelligent that they may be able to replicate the way human beings make decisions. Research on Artificial Intelligence and its adjoining areas has increased over time, and it is expected that proper and efficient development of artificial intelligence can change the way human beings work in the future. This expectation, and the rapid development in this field, increase the risk that robots will soon replace human labour. This could be very destructive for society as a whole: with unemployment, social frustration may increase and may trigger global chaos. Several observers have highlighted this threat in their research papers.
The advocates of this technology believe that artificial intelligence can reduce the workload of human beings by shifting it to automated and robotic supports, thus enhancing efficiency, effectiveness, precision, and reliability. Various researchers have evaluated and analyzed artificial intelligence in different ways to determine its capacity, capability, reliability, and future prospects. Despite some criticism and possible future threats, it is believed that artificial intelligence has the tendency to overtake the role of human beings and human labour in the future, which can be dangerous for the survival and sustainability of the human race.
Critics call the future of the world the mechanized world (Lin, Abney & Bekey, 2010) since it is expected that most of the processes and procedures will become automated and technological. However, these critics are very much concerned about the issues of this mechanized world as well (Lin, Abney & Bekey, 2010). And this issue is of high concern because “If the evolution of the robotics industry is analogous to that of computers, then we can expect important social and ethical challenges to rise from robotics as well, and attending to them sooner rather than later will likely help mitigate those negative consequences” (Lin, Abney & Bekey, 2010). These issues have to be dealt with great care and focus.
Furthermore, robots are still machines that can make critical and dangerous errors on the basis of malfunctioning, wrong analysis, and various other internal and external factors (Lin, Abney & Bekey, 2010). Relying too much on them can be dangerous and can let robots get out of control as well (Lin, Abney & Bekey, 2010). For instance, a robotic warfare aircraft technology named drones was recently introduced (Lin, Abney & Bekey, 2010). These were believed to be highly precise unmanned missile-launching aircraft; however, their precision is now being specifically argued and questioned (Lin, Abney & Bekey, 2010). This clearly shows that since robots cannot think and evaluate conditions and situations critically, they are not able to make judgments and decisions as accurately as human beings can. Replacing humans with such robots can be disastrous.
The identification and prediction of such issues ahead of time is highly important and essential. Otherwise, the rate of negative impact of these robotics will be similar to the rate of their development (Lin, Abney & Bekey, 2010). And there is every possibility that these negative impacts will outweigh the positive or beneficial impacts of artificial intelligence (Lin, Abney & Bekey, 2010). The authorities and researchers working on the development of artificial intelligence must keep these threats and risks under consideration so that necessary precautionary and control measures can be taken.
Achievement of Singularity and Unbelievably Faster Processing
The advocates of this technology try to justify the replacement of human beings with robots on the basis of the efficiency and calculative precision it can achieve, practically establishing the phenomenon of singularity. One of the key features of Artificial Intelligence is the way it can enhance the productivity, efficiency, and precision of the work being performed. According to some researchers, this speed and precision will increase so much in the near future that it will become difficult to distinguish between the processing time and the completion time of a procedure.
This phenomenon is known as ‘singularity’ and is often defined as “a future time when societal, scientific, and economic change is so fast we cannot even imagine what will happen from our present perspective” (Bell, 2003). Others define this phenomenon as “the singular time when technological development will be at its fastest” (Bell, 2003). These phrases and descriptions of singularity summarize the most strengthened feature of artificial intelligence.
The advocates of Artificial intelligence believe that soon automated systems will take exclusive control of the organizational, economic, social, and other decision-making procedures that are often prone to human negligence and errors. And this control will be so precise and rapid that no human being will be able to compete with it. In other words, due to high precision and rapidness, artificial intelligence will take control of major departments and organizations away from human beings.
However, some still doubt such speed and precision in artificially intelligent systems and question the approach through which this precision can be achieved. The advocates respond to this argument by pointing to Moore's Law (Bell, 2003). They state “that the number of transistors that could fit on a single computer chip had doubled every year for six years from the beginnings of integrated circuits in 1959. Moore predicted that the trend would continue, and it has—although the doubling rate was later adjusted to an 18-month cycle” (Bell, 2003). This explains the technical aspect of how the efficiency of these systems is growing exponentially over time; soon, it will surpass the imagination of average human beings.
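The doubling arithmetic behind Moore's observation is easy to make concrete. The sketch below simply compounds a fixed 18-month doubling cycle; it is an illustrative back-of-the-envelope calculation, not a model of actual semiconductor scaling.

```python
def transistor_growth(start_count, years, doubling_months=18):
    """Project a transistor count under a fixed doubling cycle."""
    doublings = (years * 12) / doubling_months
    return start_count * 2 ** doublings

# Three years is two 18-month doublings, a 4x increase; ten years is
# 2**(120/18), roughly a 101x increase.
```

The exponent, not the base, does the work: stretching the horizon from three to ten years turns a 4x gain into a two-orders-of-magnitude gain, which is why singularity arguments lean so heavily on sustained doubling.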
However, the future is not as pleasant as it may appear. Some researchers have shown how the ability of automated systems and robots to take over major processes could eventually lead to a future that no longer needs humans. Bill Joy of Sun Microsystems claimed that “We could be the last generation of humans,” warned that “knowledge alone will enable mass destruction,” and termed this phenomenon “knowledge-enabled mass destruction” (Bell, 2003). This apparently horrifying prospect can be a warning for the advocates of transferring exclusive control to artificial intelligence.
News and articles about revolutionary steps and innovations in the field of artificial intelligence have become a common aspect of today's life. With each discovery and invention, the future comes a step closer. Researchers are very focused on enhancing the mechanisms through which the use of artificial intelligence can be extended, as they believe that “The growing power of computer vision is a crucial first step for the next generation of computing, robotic and artificial intelligence systems. Once machines can identify objects and understand their environments, they can be freed to move around in the world. And once robots become mobile they will be increasingly capable of extending the reach of humans or replacing them” (Markoff, 2013). This clearly shows how developers and researchers have gained control and knowledge over the aspects of robotics that will help in its further advancement.
Considering IoT, one of the greatest future concerns will be the intuitive capabilities of machine learning and artificial intelligence built on an established knowledge base. While it is commonly accepted that artificial intelligence can enhance the efficiency and precision of decisions through automated machine learning and expansion of the knowledge base, the characteristic of intuitiveness is still lacking from this framework. This is partly because “the application of hybrid and [artificial] technology has always been constrained by the capabilities of their constituent processes, in particular the capability to utilize various raw materials in terms of shape and size”. Therefore, future research may focus on how intuitiveness can be induced through machine learning and artificial intelligence, which could further improve decision-making capability in IoT and other associated domains.
Critically analyzing and evaluating the rapid development of artificial intelligence and its possible impacts on the future of human beings, it can be clearly observed that the hovering threats and risks outweigh the benefits that can be harnessed from artificial intelligence. Even though robotic technology has been developed to the extent that it can exhibit a certain IQ level and make decisions on the basis of an extensive knowledge base, it can never be as efficient and precise as a human being. Computers can only make decisions on the basis of logic and calculations, which can be immoral and unethical as well. These aspects are not recognized by computers or robots, since they are human attributes and can only be understood with the presence of cognitive and judgmental capabilities based on reasoning, intuitiveness, and emotions. These qualities are present only in human beings, and no robot can achieve them. No matter how efficient and effective these robots are, they can never make use of emotions and intuitiveness in decision-making processes, increasing the risk of faulty or inappropriate decisions that may be challenging from ethical and moral perspectives.
“VanguardSw”. (2013). Decision Support Systems Software. Vanguard Software Corporation. http://www.vanguardsw.com/solutions/application/decision-support/
Rodriguez, Carlos; Daniel, Florian; Casati, Fabio; Cappiello, Cinzia (2010). “Toward Uncertain Business Intelligence: The Case of Key Indicators”. IEEE Internet Computing 14 (4): 32. doi:10.1109/MIC.2010.59.
Bell, James John. (2003). Exploring the ‘Singularity’. Kurzweil Accelerating Intelligence. http://www.kurzweilai.net/exploring-the-singularity
Lin, Patrick; Abney, Keith & Bekey, George. (2010). Robot ethics: Mapping the issues for a mechanized world. http://digitalcommons.calpoly.edu/cgi/viewcontent.cgi?article=1020&context=phil_fac
Markoff, John. (2013). The Rapid Advance of Artificial Intelligence. http://www.nytimes.com/2013/10/15/technology/the-rapid-advance-of-artificial-intelligence.html?_r=0