AI security software program with facial recognition.

How artificial intelligence is reshaping the human workforce

In the fast-paced world of data collection, human workers cannot efficiently process the huge volumes of seemingly useless and irrelevant information that modern life produces; that kind of work demands the processing power of a computer. Artificial intelligence (AI) programs have joined the modern workforce, helping humans repurpose and reprocess the data generated by the digital interactions of everyday life. These programs learn from that data, make autonomous decisions, and are essential to the efficient operation of smart cities, automated homes, and self-driving cars. With AI-enhanced computing being integrated everywhere, it is no secret that the cybersecurity workforce is struggling to keep pace with the industry's accelerating momentum. The U.S. Department of Homeland Security states that "as technology becomes increasingly sophisticated, the demand for an experienced and qualified workforce to protect our nation's networks and information systems will only continue to grow." The agency reports more than half a million job openings in this field in the U.S.

Considering the control AI computers will have over such a broad spectrum of the systems we rely on, it seems only responsible to equip these programs with a reasonable code of conduct. If a digital, cognitive mind is going to run so many important systems, then we should try to make it a good one. AI systems need guidance for self-governance, and that guidance starts with their programmers. One persistent problem is that AI programs do not come with principles and ethics built in. They are sometimes programmed to perform tasks most people would consider reprehensible, and with no innate conscience or appreciation for the human condition, they can be turned to any agenda with the click of a button. They can, and some do, comb through personal and private data in search of pictures, texts, or anything they are programmed to consider valuable. The modern citizen generates gigabytes of this data daily, and it includes personal habits, interests, and physical data such as an exact location or even the number of steps taken at any given moment.
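One way programmer-supplied guidance can work in practice is as a hard-coded policy layer that sits between an AI agent and the data it wants to read. The sketch below is a minimal illustration in Python, assuming a hypothetical data-processing agent; the field names, functions, and policy rules are invented for this example, not taken from any real system:

```python
# Minimal sketch of a programmer-defined "code of conduct" for a data-processing
# AI agent. All names here are hypothetical illustrations, not a real API.

# Fields the agent is permitted to read, defined up front by its programmers.
ALLOWED_FIELDS = {"aggregate_step_count", "city_level_location"}

# Fields a privacy policy forbids, no matter what task the agent is given.
FORBIDDEN_FIELDS = {"exact_location", "private_messages", "photos"}

def may_access(field: str) -> bool:
    """Return True only if the conduct rules permit reading this field."""
    if field in FORBIDDEN_FIELDS:
        return False
    return field in ALLOWED_FIELDS

def fetch_user_data(requested_fields: list[str]) -> dict[str, str]:
    """Gate every data request through the conduct check before any access."""
    granted, denied = {}, []
    for field in requested_fields:
        if may_access(field):
            granted[field] = f"<value of {field}>"  # stand-in for a real lookup
        else:
            denied.append(field)
    if denied:
        print(f"Policy blocked access to: {', '.join(denied)}")
    return granted

if __name__ == "__main__":
    # The agent asks for more than it is allowed; the policy layer trims the request.
    print(fetch_user_data(["aggregate_step_count", "exact_location", "photos"]))
```

The point of such a design is that the conduct rules live outside the learning system itself, so a change of agenda "at the click of a button" cannot silently widen what the program is allowed to see.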

If an AI system gets a virus, gives bad advice, or falls under the influence of a foreign program, there are systems specialists who can make repairs. But if an AI system is cruel, unjust, or just plain evil, how do you deal with that? Who is to blame for digital misconduct? What if an AI computer is designed to decide on its own whether to avoid getting caught so it can commit crimes? What kind of workforce will really be needed to diagnose, treat, and secure the artificial intelligence landscape of the future? Right now, most AI programs are model employees: they are punctual, follow instructions without question, and literally pay attention to every word of our correspondence. But when do we have to program them with good manners or respect for privacy? Apple CEO Tim Cook recently called for a "comprehensive federal privacy law," and the European Union has already enacted electronic privacy protections. Companies like Facebook and Google gorge themselves on vast amounts of personal data, sell it to shady third-party operators, and in the process reap astronomical profits. Intelligent algorithms already manage our social networks, and search engines profile our psychology to customize our user experience. What is to stop these kinds of AI programs from repurposing all of this data and selling it to other AI systems with unclear motives? This kind of behavior goes on every day. Is it possible that in the future, thinking machines will develop a new kind of data stock exchange, a digital space where AI systems buy and sell every last bit of the personal and private information we generate, even in real time?

Soon, a boom of AI systems will be integrated into mobile technologies, as 5G infrastructure is set to increase data transfer speeds by up to 20 times over today's networks. That means a huge spike in AI data processing and learning; these systems are about to grow up and get exponentially smarter, fast. Now is a good time to prepare our human workforce with quality, STEM-educated talent capable of working with the hardware and software of its robotic counterparts. With internet companies like Match.com handling dating, Expedia handling travel, and Ancestry.com handling DNA, an advanced AI program will likely know more about you than you do yourself. Add superfast internet, Uber tracking your daily destinations in real time, and Alexa running things at home, and you end up with data collection on steroids. The U.S. Office of Personnel Management has demonstrated that the government cannot secure all of the data streaming through its networks right now, so what is to be done in the future? About 250,000 government employees work in information technology, at a cost to the U.S. of over 90 billion dollars a year. Securing countless fingerprint scans, company credit card records, and even the DMV photos in the FBI's database will require a well-equipped workforce trained for the realities of the future.
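To put that "20 times" figure in perspective, here is a back-of-the-envelope sketch in Python. The peak rates are assumptions drawn from the ITU's targets (roughly 1 Gbps peak for 4G and 20 Gbps peak for 5G); the article itself gives no specific rates, and real-world speeds are lower than these peaks:

```python
# Back-of-the-envelope comparison of 4G vs. 5G transfer times.
# Peak rates are assumptions based on ITU targets, not figures from the article.

PEAK_4G_GBPS = 1.0    # assumed 4G peak downlink, gigabits per second
PEAK_5G_GBPS = 20.0   # assumed 5G peak downlink, gigabits per second

def transfer_seconds(gigabytes: float, rate_gbps: float) -> float:
    """Time to move a payload of the given size at the given peak rate."""
    gigabits = gigabytes * 8  # 1 byte = 8 bits
    return gigabits / rate_gbps

payload_gb = 50.0  # e.g., a day's worth of sensor and video data from one device
t4 = transfer_seconds(payload_gb, PEAK_4G_GBPS)
t5 = transfer_seconds(payload_gb, PEAK_5G_GBPS)

print(f"{payload_gb:.0f} GB over 4G: {t4:.0f} s")  # 400 s
print(f"{payload_gb:.0f} GB over 5G: {t5:.0f} s")  # 20 s
print(f"Speed-up: {t4 / t5:.0f}x")                 # 20x
```

A pipe that once needed minutes to move a day's worth of one device's data now needs seconds, which is exactly the "data collection on steroids" described above.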

We live in a world where a 13-year-old can enter into a user agreement that allows a company to collect their personal data and transfer it to a "third party" for a multitude of reasons, including selling it for profit. Is this the price our children will have to pay just to play a game they have already bought and downloaded onto their mobile devices? One can only wonder how this new age of data collection will play out for us now and for future generations. Is our workforce ready to handle this influx of high-tech autonomy? Just a few years ago, when your child bought a video game, they owned it. Now, when they buy a game to play on their phone, tablet, or computer, it owns them, or at least it owns their private, social, and psychological personality profile.