How the Known Can Become Unknown Through Artificial Intelligence


We have been talking about big data and artificial intelligence for a while, so it is natural to discuss their use in the airline industry. According to Michael Porter, it is one of the least profitable businesses known. Between security, maintenance, operational costs and so on, it is a challenging business to be in. The memory of 9/11, when four airplanes were hijacked, is another burden on the industry. Governments adopted strict regulations at airports, and checks have been strengthened and are becoming more sophisticated today, but far less attention is given to the airplanes themselves. What if such a situation arises again? Are airplanes intelligent enough to tackle these issues on their own? Thinking about machine learning, is there any possibility of developing a self-defense mechanism inside the airplane that no one knows about?

It is highly important to understand the security of information when talking about information security. The idea is to keep this defense mechanism completely unknown even to the people working on it, which builds more uncertainty in the minds of terrorists. What if the mechanism triggers itself to knock down the hijackers, with little possibility of stopping it, because machines are not emotional and do not negotiate? A defense mechanism present at numerous points, learning continuously in a controlled yet human-friendly manner, whose behavior no one can predict, but which is capable of identifying the offender and triggering only for a genuine reason, could help organizations build hijack-free airplanes.

There should be a proper test for these defense mechanisms, and they should stay completely silent, with almost no information about their existence on the flight. The reason for not disclosing this information to anyone is, precisely, 'security'. Questions will naturally arise: how can we check or test something that acts on its own? It would create confusion across the whole system, which should be managed collectively by a small circle of highly trusted people in the organization. The less data we share, the more secure the environment is. Leveraging machine learning at the moment of a crash could also interest the organizations designing airplanes: what if planes could repair minor engine failures themselves, and act to avert a crash before it is too late?

Can artificial intelligence be used to keep us from knowing what we could know?

Upcoming article –
Object recognition/Image recognition

What is special about the data? Why do we cherish moments in sports?

Ask the people who are related to the data about its usage. For example, a footballer may have scored a goal today, but it might be his special goal, the one with which his team won; he might remember that his team held a record of winning whenever he scored, or that the goal came from a penalty. Details like these add meaning to how we extract the data. Likewise, asking questions of the people behind the data can give insight for running queries and harnessing the data in a relevant manner. This is just one example: a person may apply for many jobs, and if he is interested in a particular job but, due to market conditions, is also looking at other subsidiary jobs available in the market, that could tell organizations where he really fits. Asking people about the data they generate gives an idea of how to use that data, and for what reason. It is difficult to implement, but it could lead to the desired results if done appropriately.


Save data, back up data, and back up again. Do you back up again for no reason?


Not all data is important. In organizations, input data is usually the most important, but what do we do with data once it has been harnessed from the database? Save that as well? How many times do we create data and templates and save them, thinking they might be useful tomorrow? Most of the time they are never used, because by then we have built a better desire, or a better insight, to take on the challenge. It is very important to understand what we are looking for, while accepting that there is no expert and that a basic trial-and-error approach will most likely be needed to arrive at any solution. Teams fight over the best alternative to implement, sometimes for the sake of their department, and sometimes for the creativity they can bring. But most of the time it just ends in wasted time and opportunity.

Data, once harnessed, should either be taken care of or trashed, or there might be a better mechanism to minimize its storage requirements: saving only the methods, procedures, and queries needed to produce it again. Some would feel lazy, or would argue about the time required to rerun a query, if saved data might serve as a reference when we cannot bring more sense to the ongoing work. This happens in many places inside any organization, and crap data occupies a lot of unnecessary space. Big data is getting bigger and bigger, but it should be checked whether it is really relevant. If not, there must be a defined process for declaring anything relevant to be relevant. That process needs thorough brainstorming from different parts of the department and should flow equally to the relevant areas inside the organization.
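As a rough sketch of saving the query rather than the result, the snippet below keeps query definitions in a small catalog table and re-derives the data on demand. The `query_catalog` table, the `warehouse.db` file, and the sales query are all hypothetical names chosen for illustration; the point is only that a stored SQL string is a fraction of the size of an exported result set.

```python
import sqlite3

# A minimal sketch of the "save the query, not the result" idea.
# The catalog table and the example sales query are hypothetical;
# in practice this would sit on top of whatever warehouse is in use.

conn = sqlite3.connect("warehouse.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS query_catalog (
        name        TEXT PRIMARY KEY,
        sql_text    TEXT NOT NULL,
        description TEXT
    )
""")

def save_query(name, sql_text, description=""):
    """Store the query definition instead of a materialized result set."""
    conn.execute(
        "INSERT OR REPLACE INTO query_catalog VALUES (?, ?, ?)",
        (name, sql_text, description),
    )
    conn.commit()

def run_saved_query(name):
    """Re-derive the data on demand; nothing extra sits on disk."""
    row = conn.execute(
        "SELECT sql_text FROM query_catalog WHERE name = ?", (name,)
    ).fetchone()
    if row is None:
        raise KeyError(f"No saved query named {name!r}")
    return conn.execute(row[0]).fetchall()

save_query(
    "monthly_totals",
    "SELECT strftime('%Y-%m', order_date) AS month, SUM(amount) "
    "FROM orders GROUP BY month",
    "Recomputed with run_saved_query() whenever needed, "
    "instead of being exported and saved as a copy.",
)
```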

Using hyperlinks, rather than copies, for the data resulting from a query can be one way to minimize the space the queried data requires. This can save storage to a major extent, easing the space crunch while still giving freedom to run multiple queries and save or export the results the way one wants. However, it creates a boundary around where to keep them, probably inside the tool where the query was run before the export or other activities take place. There are also issues with multiple backups: doubling or tripling them for security can hinder effectiveness. Multiple backups are a safe choice in extreme cases, but they can lead to duplicated data and version mismatches if not administered correctly. Incremental backups over the data already stored will further reduce storage requirements over the course of the activity. There is a need to examine how backup technology should work for systems built around data harnessing.
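To illustrate the incremental idea, here is a minimal sketch, assuming a plain directory-to-directory setup, that copies only the files whose content has changed since the last run. The `data` and `backup` paths and the manifest file name are hypothetical; a real backup tool would also handle deletions and retention.

```python
import hashlib
import json
import shutil
from pathlib import Path

# Hypothetical paths; any source/backup pair would do.
SOURCE = Path("data")
BACKUP = Path("backup")
MANIFEST = BACKUP / "manifest.json"   # remembers what was backed up last time

def file_digest(path):
    """Content hash, so timestamp churn alone doesn't force a copy."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def incremental_backup():
    if not SOURCE.is_dir():
        raise FileNotFoundError(f"source directory {SOURCE} not found")
    BACKUP.mkdir(exist_ok=True)
    seen = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    copied = 0
    for path in SOURCE.rglob("*"):
        if not path.is_file():
            continue
        rel = str(path.relative_to(SOURCE))
        digest = file_digest(path)
        if seen.get(rel) != digest:          # new or changed since last run
            target = BACKUP / rel
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, target)
            seen[rel] = digest
            copied += 1
    MANIFEST.write_text(json.dumps(seen, indent=2))
    return copied

if __name__ == "__main__":
    print(f"{incremental_backup()} file(s) copied")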

Duplicate data is also created while providing inputs, in the process of feeding values into the system. An efficient system that checks for duplication itself and eliminates it could revolutionize the storage space used, so that only relevant data gets into the system. With proper control over what data comes in, through a creative filter applied before it is fed, big data can be minimized and controlled. Once we control and minimize the effect of unstructured data at this early stage, that data will probably need less attention later, when it is dumped for a relevant reason.
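One simple way such a filter could work, sketched below under the assumption that every incoming record can be canonicalized to a JSON string, is to fingerprint each record and silently drop anything already seen. The `DedupFilter` class and the sample feed are illustrative, not a prescribed design.

```python
import hashlib
import json

class DedupFilter:
    """Drop records whose canonical form has already been ingested.

    A minimal in-memory sketch; a real system would persist the seen
    set (e.g. in a key-value store) and define canonicalization per feed.
    """

    def __init__(self):
        self._seen = set()

    def _fingerprint(self, record):
        # Sort keys so field order alone doesn't defeat the duplicate check.
        canonical = json.dumps(record, sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()

    def ingest(self, record):
        fp = self._fingerprint(record)
        if fp in self._seen:
            return False          # duplicate: filtered out before storage
        self._seen.add(fp)
        return True               # new record: let it through

feed = [
    {"id": 1, "city": "Oslo"},
    {"city": "Oslo", "id": 1},    # same record, different field order
    {"id": 2, "city": "Pune"},
]
f = DedupFilter()
kept = [r for r in feed if f.ingest(r)]
print(kept)                        # only the two distinct records remain
```

Placing the check at ingestion time, as the text suggests, means the duplicate never reaches storage at all, which is cheaper than cleaning it up afterwards.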