Rebranding AI as a Source Code Switch
Head’s Up: This blog post turned into a researched and tightly woven (and long!) opinion narrative, which is why I decided to record it for people who like to listen while doing other things. This is not a technically focused discussion, but it does take points from science. So if you’re technically driven and are curious about my opinion, feel free to skip ahead, but know that one idea builds on top of another.
I feel like I’m jumping on the AI opinion train, but from the discourse I see and hear, I think AI is due a rebrand — overdue in fact, since AI is everywhere with constant internet browsing, imaging, and content creation. The source code switch in the title isn’t a programming reference; it’s about finding a different perspective through an energy other than fear. I will go through all of this and start at the beginning with some basic information to show how AI is the next evolution of what has come before, but has the potential to be something better than what it is now.
Creative Look-a-likes
For me, AI in its current form, as seen by the masses on the internet, is about becoming a brand in a certain light. Are you saying the right thing, looking the right way, designing the right discourse between you and your coworkers, clients, and expectations? It seems to be more about duplicating what we see in front of us and being “realistic” rather than offering us a new way of seeing, living, and believing. This includes how we present ourselves online, how we’re trained and marketed to, how we consume media, and all the other ways AI is being used to copy and paste other people’s likes and dislikes onto our own.
As a brand manager, AI feels disingenuous, but before you get upset, here’s my proof: there are many ways to create content, but who gets to decide which way is ‘right’? The right answer is: everyone. We get to decide what we create and the type of technology we use to create it. Our creative ingenuity is used to imagine a picture of the end product and then we set to the task of producing it. It’s not about being perfect or ‘branded’ in some cookie-cutter fashion, it’s about being authentic, real, and true to who we are as human beings.
Producing something new often gets pulled by the siren call of corporate branding, or smoothing over the rough edges of what makes something unique and therefore human. People are not brands, which I know is the opposite view of the collective, but I will prove my point later as we take a look at what AI does for us now on the internet.
It feels artificial, which makes the ‘A’ of AI appropriate, but branding something as artificial is no way to win over the hearts and minds of people.
Is intelligence artificial? Nature has its intelligence, we are intelligent, but are machines or robots that we train with our information intelligent? We’re still working that one out.
Just like the software programs that came before it, AI duplicates what we do — how we present our creative work and ourselves — and gives the creative process scalability at super speed. But some AI is too constraining; it forces the person into a narrow process, leaving them feeling squeezed, frustrated, or not enough, when all they want is to do the thing, so can you please just make it happen?
If you don’t feel satisfied that the content comes from you after using AI, maybe you feel disconnected from the process or forced to create in the only way the algorithm knows how. And then, once complete, who is the owner of that piece of data, which includes images: the person who typed in the parameters, the person who created the algorithm, the company who owns the algorithm, or the AI itself? Can a non-sentient being own something if it is the actual content creator? Ownership with AI is already starting to cause problems because of this nebulous way of producing.
This is not to take away from all the people who are training AI and using their own creative power to see things differently and make it something worth arguing over. All the people involved in its development are super smart and have creative ingenuity in spades, but with power comes force, so let’s make it about being a positive force for change. Even with all the branding and ownership issues, I agree, it’s time to move away from old ways of thinking and doing, and AI has the power to get us there.
I do think the latest AI tech is worth using and I’ve learned there are different ways to categorize it: strong or weak, generative or sentient, or smart and dumb, to name a few, but I see AI as either “internet AI” or “cognitive reasoning AI.” I’ll start with the first one because that’s what we’re all trained on right now. The hype around “internet AI” feels a little overwrought, and here’s why:
Chatbots and voice-to-voice (or talking to a robot) are just the latest versions of phone trees, answering machines, and other ‘low-tech’ that used to be the latest tech.
Language prompts and other communication devices are another version of our parents correcting our grammar and behaviors to sit up and speak clearly. How many of us feel like broken records (another low tech reference, but also retro!) when we ask our kids to speak up or correct some form of action? I’ve often wanted to record myself and hit play when required.
Simulation for teaching and learning is nothing new, it’s just being done in a different way. Learning how to fly a plane with a computer, examining how to build cities and systems, and other programs are all different types of simulators.
Creation of images through AI is just the newest version of Photoshop and other graphic design software, which also use algorithms that generate filters to turn nothing into something.
Creating realistic-looking imagery, whether 2D or 3D motion graphics, is nothing new; Pixar perfected it a long time ago. This category also includes architectural software packages like BIM, or building information modeling. What’s different this time is that everyone has access to it, not just the top design and art studios, pulling it into the category of: “anyone can do this, why are you charging so much?”
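The ‘filters’ mentioned above really are small algorithms run over a grid of numbers. Here’s a minimal sketch of the idea in Python: a hypothetical 3x3 box blur over a made-up grayscale grid, nothing like how any real package implements its filters, just the shape of the trick.

```python
def box_blur(image):
    """Apply a 3x3 box blur: each pixel becomes the average of its neighborhood."""
    rows, cols = len(image), len(image[0])
    blurred = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Gather every in-bounds neighbor, including the pixel itself
            neighbors = [
                image[r + dr][c + dc]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if 0 <= r + dr < rows and 0 <= c + dc < cols
            ]
            blurred[r][c] = sum(neighbors) / len(neighbors)
    return blurred

# A bright dot on a dark background gets smeared into its neighbors.
dot = [
    [0, 0, 0],
    [0, 90, 0],
    [0, 0, 0],
]
print(box_blur(dot)[1][1])  # center pixel: 90 / 9 = 10.0
```

Stack enough simple passes like this (blurs, sharpens, edge detection) and you get the filter pipelines that turn nothing into something.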
All of these examples use the latest tech to solve the same or similar problems from the past. In architecture we call it precedent because there is something out there that solved a similar problem, so we look behind us to find the solution. But as creators, we look both backward and forward when solving our design issues to narrow down the choices that get us to a point when we decide to move the idea out into the world. Solving the problems of tomorrow takes a collective of forward-thinking people who understand the past is in the past and not knocking down our door trying to get at us like an army of zombie robots. Is there such a thing?
The collective understanding of AI right now is about a force of change, or being forced to change. Many people are caught in the past with anxiety of losing their jobs to AI. From the tall tale of Paul Bunyan on down the line to all the dystopian movies that pit people against robots, we’ve seen this story before and we know how it will end. Anxiety prompts some to run away and hide their heads in the sand so they don’t have to deal with the collective change that is right before their eyes. But AI is obviously here to stay, so how can we progress our thoughts and understanding to make it about a real power switch, not just one that marches across the internet or our deepest and darkest fears?
The Basics of AI
When I was in third grade, I went to a school called L’Ecole Francaise where I learned not just the French language, but a computer language called BASIC — Beginner’s All-Purpose Symbolic Instruction Code. Teaching kids to program computers may seem forward-thinking today, but back then it felt revolutionary. It was the year 1985, which has the curious distinction of being the same year that the song We Are the World was released — more on that later. BASIC is aptly named, but we had fun with it, learning how to create shapes and images on the computer screen, which was in one color: green. It helped me understand that the language behind what I’m seeing on the screen can be both simple and complex. It starts with a few simple terms that build on top of each other to form a system that works to create something new. When an error code happens, it’s aggravating because finding its cause can be mind-numbing work. It’s usually a little blip somewhere deep in the lines of code, which is where we’re going today:
deep into the lines of our cultural code to find the blip in our warehouse of things that keeps us from realizing our bright future tomorrow with AI,
and that starts with understanding it from a systems perspective.
AI can be viewed as a system that interacts with other systems, part of the field of knowledge we use to look toward the future, which requires a belief in a brighter tomorrow while building on the past. There are many systems that make up our world, some we’re aware of while others we’re too steeped in to notice, like:
Systems of cultural awareness
Systems of being and feeling, which relates to culture
And systems of modeling behaviors
The internet is another system that’s taken for granted as part of distributed science, which computer systems are part of (nodal rather than centralized). Other types of distributed science are citizen science (like birders) and virtual teams that physically live in different locations but work as a collective. This is a networked way of learning, inferring, and creating, and AI is part of it. Distributed science says we use different access points to bend and morph ideas or analyses of something. For example, if I’m trying to solve a problem or create something new, I use different resources at different times: the internet, books, an elder who might have historical data, experts, etc. This way of collecting and sorting shows that information can come from anywhere, and it’s better that way because it allows for better data, leading to better decisions. Similarly, we can model our behaviors to find the ‘internet of things,’ or a composite of understanding that doesn’t live within one individual but in multiple individuals and multiple sources, which is the reasoning behind collaboration.
In my view, this is the better way of doing things because we can’t hold all the information all the time.
The next systems thinking AI moves us to is cognitive science, or the conscious ways we all interrelate to each other, our environments, and the things in them, like machines. This approach leads us to distributed cognition theory, which states we don’t process all of our information in our heads; we also use external systems including social structures, tools, and technology, putting AI right in the center of all these ideas. AI scientists and creators use all of these scientific constructions to create a model with layers of information formed by algorithms — our new “love ‘em or hate ‘em” belief system. The algorithms train the model with inputted data so the model can learn, giving us a double-take when it comes to using AI to train people. Does it make sense to train robots to train people, or can we cut out the middle man? But I understand the reasoning behind using computers to learn, so it makes sense for businesses to invest in a form of training that is cooler than PowerPoint presentations, the untrained AI of the past. Just don’t make your AI read PowerPoint to the people being trained.
Learning Robots
The man given credit for dreaming up AI was both a computer and cognitive scientist, giving him the distinction of being part of two worlds, or two ways of thinking. We should stop right here and talk about the two main descriptions of AI: machine learning and deep learning. Both are part of AI, but use different methods to process and analyze data that is then fed into a network or internet of things through an algorithm so that the model can make a ‘decision’ about the next step. It’s meant to emulate human decision-making, but in my opinion falls short because we make decisions based on past memories, not just past cognition. In other words, we feel into our decisions, using gut instinct and other modes of intuition alongside our intellect. For example, there are many businesspeople who are valued for their ‘gut instinct’ when making decisions and who also have the educational backup of grounded information, putting them in the upper echelon of jobs for this type of decision-making.
According to UC Berkeley, machine learning uses historical data to predict a future outcome by finding patterns of behavior. This is all our likes and dislikes on the internet that form an amalgam of information. Since the data comes from us, we are the data sets that inform the algorithm that informs the model. And since every like and dislike updates the model, it is constantly changing to adapt to the new information. At least it should. It’s not a perfect process for some, especially those of us who are always looking out for the latest and greatest. How do you call in the new when you can’t see it?
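That pattern-finding loop can be sketched in a few lines. This toy ‘model’ is just a frequency count of invented likes, far simpler than any real recommender, but it shows the basic move: learn a pattern from historical data, then keep updating as new data arrives.

```python
from collections import Counter

def train(history):
    """Count how often each topic was liked in the historical data."""
    return Counter(topic for topic, liked in history if liked)

def predict(model):
    """Predict the topic the user is most likely to like next."""
    return model.most_common(1)[0][0]

# Hypothetical history of (topic, liked?) pairs -- we are the data set.
history = [
    ("music", True), ("sports", False), ("music", True),
    ("cooking", True), ("music", True), ("cooking", True),
]
model = train(history)
print(predict(model))  # "music" (liked three times, most of any topic)

# Every new like updates the model, as the paragraph above describes.
model.update(train([("cooking", True), ("cooking", True)]))
print(predict(model))  # now "cooking" (liked four times)
```

And the paragraph’s complaint holds even in miniature: this model can only ever predict the most popular thing it has already seen, never something new.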
And in some cases, the system seems broken when the historical data suggests something that doesn’t align at all. This is an error code that feels depleting because it sets up a cultural belief that you are not part of the collective. You’re the weirdo in the room, so you’ve been deleted from the data set. If the algorithm is programmed based on an average, or the most popular, then that’s what you get — nothing new or innovative.
Which brings us to my “cognitive reasoning AI,” or AGI — Artificial General Intelligence, which uses deep learning as part of its algorithm function. Again, the branding is off because the word ‘general’ makes it sound too generic, causing some of us to consider it off-brand or not as good as the national brand. AGI is the most exciting part of AI because it uses cognitive distributed science in a new way, or more correctly in an old way because it follows all of our cognitive functions as humans.
Branding is one thing, but definitions are another, and it’s hard to reach a consensus on what AGI is. According to some sources, AGI is still a theory, while others cite self-driving cars and voice-powered tech as part of the deep learning that powers it. According to IBM, deep learning is a subset of machine learning that uses neural networks, which train themselves the way humans do. From IBM’s website:
“The chief difference between deep learning and machine learning is the structure of the underlying neural network architecture. “Nondeep,” traditional machine learning models use simple neural networks with one or two computational layers. Deep learning models use three or more layers, but typically hundreds or thousands of layers to train the models.”
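To make IBM’s layer distinction concrete, here’s a hedged sketch in plain Python where a ‘layer’ is one round of weighted sums and a deep model is the same operation stacked. The weights are made up for illustration, not trained.

```python
def relu(x):
    """A common activation function: pass positives through, zero out negatives."""
    return max(0.0, x)

def layer(inputs, weights):
    """One computational layer: weighted sums of the inputs, passed through ReLU."""
    return [relu(sum(w * x for w, x in zip(row, inputs))) for row in weights]

def forward(inputs, layers):
    """Run the inputs through every layer in order."""
    for weights in layers:
        inputs = layer(inputs, weights)
    return inputs

x = [1.0, 2.0]

# "Nondeep" model: a single computational layer.
shallow = [[[0.5, -0.5]]]
print(forward(x, shallow))  # [0.0]

# "Deep" model: the same idea stacked -- three layers here, hundreds in practice.
deep = [
    [[0.5, -0.5], [0.25, 0.25]],  # layer 1: 2 inputs -> 2 outputs
    [[1.0, 1.0], [0.5, -1.0]],    # layer 2
    [[2.0, 0.0]],                 # layer 3: down to a single output
]
print(forward(x, deep))  # [1.5]
```

Real deep learning models learn those weights from data; the point here is only the structure: one layer versus many.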
And if you’re wondering, a neural network is distributed science that mimics processes in our bodies and is also like the internet. Some days those neurons are firing, some days they’re not, because when you ask AI to look up its own definition, some of the information is a replica while other parts are original, and it’s hard to tell which is which. The search function behind the information changes depending on the number of searches and the information uploaded to the model (the internet), making AI an unreliable narrator. Yes, we can find a definition, but to understand the truth we have to experience it as form in our bodies; we have to discover it for ourselves along with a little help from our friends and other sources, giving distributed cognition theory an important role in our internet of things.
It now takes a collective of thinkers and doers to get things done around larger and more complex issues, but AGI, as a better cognitive reasoning robot can be put to the task. Wouldn’t it be nice to find a way to feed the information into a system that looks at the lines of our error codes and finds them without the need to retaliate?
Do We Really Even Care? And We Finally Come to the Source
That was a pretty deep dive into the code of AI, and I hope it revealed some things, though some of you may already be aware while others may not care, which is fine because right now the focus of AI is on being faster, cheaper, more productive, etc. This isn’t all bad, because moving things off our plates is a great option so we can focus on the behaviors that matter to us, like being creative.
From steampunk robots to slick AI machines, it’s a progression of one step in front of the other, which is a hallmark of machine learning. I used machine learning AI to help me edit my book and choose words, to make my writing sound less robotic. But as my friend said to me in his best ironic monotone, I used a robot to help me sound less like a robot. In the end, machine and deep learning AI don’t produce results; they produce options and opinions about what to do. AI is no decision maker, even if it uses deep learning to come up with the answers. It belongs in our toolbox of things for production and problem solving that can originate with the algorithm but ultimately is put through our own filters, including cultural awareness, to make the final decision.
When thinking about a model for an AI algorithm that trains people, I thought that it could have a few different modes:
mansplaining mode
white savior mode
I don’t give a shit mode.
If you had your choice of modes, which one would you pick? Of course most of us would pick the last one because it’s funnier than the previous two and much more relatable. While mansplaining mode would have its own type of humor for some, white savior mode is out of bounds in a cultural awareness that computers and robots just don’t get, but hopefully the people training them do, which underscores my point about training a robot to train. Also, do we really care that much about being trained or chatted to by a robot? Not really, we just care about our jobs and rightly so because that’s how we feed ourselves and our families, put a roof over our heads and all the other daily needs in our lives.
All the tips and tricks you can do with AI on the internet can be fun for some, aggravating for others, and eye-rolling for the rest of us. I fall into the last category because the magic tricks of creating a new image or spell-magic for language don’t interest me. I think creating images and writing are fun, though I am in deep debt to spell check. If it’s fun for you, why would you give it away to a bot? But other people may find creating content, as we call it now, a pain in the ass, so they have no problem giving it away to someone — or something — else. It’s all in how you look at it or experience the world through your nervous system, bringing us to the title track of this narrative: Source Code Switch.
Source code in this case does not mean the lines of an algorithm and has nothing to do with computer science. Source here gets a capital ‘S’ because it is the energy that extends out from our hearts and is part of every organism on our planet. Like the source code for a computer program, it is the grounded nature of our ability to relate to one another and is part of our daily experience of how we view the world. Many of us experience a major Source code block, or life block. When this happens, we’re too steeped in it to fully comprehend why things happen the way they do. This is cognitive science at its worst because we don’t see the relationship.
Switching your Source code means viewing your life from the opposite perspective and finding healing by believing things are working for you, not against you. We can apply this idea to AI by switching our code, or perspective, on how AI can work for us, not against us. Find the corrective action by taking the opposite point of view and letting your mind wander to find possibility in giving that part of your task or job over to a machine learning robot. If you feel the pressure of trying to pay your bills, pretend you don’t have any, and then feel the freedom of doing your own thing. It’s a thought experiment to free your mind and allow flexibility to flow through the current of belief in a brighter future.
If AI is changing our perspective on how we can run our businesses, improve our careers and manage our lives through chatbots, interpersonal texts, and image generation, we can do the same for AI, especially as it relates to our livelihoods and the livelihood of the planet, which are interrelated. We have tons of information right at our fingertips; why not use it for something incredibly positive? It’s more than just being efficient or making more things, it’s about solving the big, wicked problems of all our centuries combined, which will be discussed further on.
Algorithms and Catastrophes
Notice how everything is an algorithm now; even my daughter’s math class technique for fractions is called an algorithm. It’s trendy for sure, but we also need to understand the classification better. An algorithm is nothing more than a sequence of steps arranged in a specific way to produce a result, written out in a programming language when it’s meant for a computer. It’s the mode of learning for a program or model, not for people to assume as their own language. We have our own languages, and they’re all the obvious ones plus math, but most importantly music.
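To make the word concrete, here’s the grade-school fraction procedure written out as a short program. The sequence of steps is the algorithm; the Python is just one language for expressing it (the function name and numbers are my own illustration).

```python
from math import gcd

def add_fractions(a_num, a_den, b_num, b_den):
    """Add a_num/a_den + b_num/b_den and return the reduced result."""
    num = a_num * b_den + b_num * a_den   # step 1: common denominator is a_den * b_den
    den = a_den * b_den
    common = gcd(num, den)                # step 2: reduce to lowest terms
    return num // common, den // common

print(add_fractions(1, 2, 1, 3))  # 1/2 + 1/3 = (5, 6)
print(add_fractions(1, 4, 1, 4))  # 1/4 + 1/4 = (1, 2)
```

The same recipe works on paper, on a chalkboard, or in any programming language, which is exactly why the classification belongs to the steps and not to the language.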
When you think about it, music is an algorithm as a language: it can be understood through its notes on a page by musicians, no matter their ethnic background or spoken language, but it’s so much more than that. It’s a connecting device, or connective tissue, to take it to a bodily metaphor, because that is where we feel it — in our cores, our hearts. It makes us move our feet, shake our money-makers, and become part of the collective understanding of being in joy. Music may not be the moneymaker it once was, or not in the way it was, because of the algorithms behind it, but it is no less sentient. It brings us all to a feeling level, as if we aren’t there already, and has the power to help us believe we are all one through rhythm, tone, harmony, melody, and lyrics.
And now we come to my We Are the World reference. Did you forget about it from all the information in this article? If we are a thinking-feeling planet and Universe, then we are all one through a universal code that says we belong. Belonging means all the technologies we have developed also belong because they are part of our planet’s systems. How we use them is a matter of being destroyers or creators. I believe we are creators because it is innate to our ways of being and can be proven over and over again without much difficulty; creativity is energy and you can’t destroy energy.
We Are the World’s lyrics refer to being positive in our outlook because “we are the ones who will make a brighter day.” But it also shows how the technology of the time played a role because the release team orchestrated a worldwide event that required radio stations to play the song simultaneously. Pre-internet, that meant phone calls without voicemail, mailing cassette tapes internationally, and creating a network of people without the benefit of online chats. This was a huge undertaking and we all benefited by hearing the universal tone of the song at the same time everywhere around the world.
The song’s story can be viewed on the Netflix documentary “The Biggest Night in Pop” directed by Bao Nguyen. When we are all one and singing the same tune or the same music, we feel the musical algorithm of the time and place in our hearts, making us feel electric. We feel our hearts expand and others do, too, impacting our cores as we reach for the sun and see a brighter future for ourselves and future generations.
While the branding may be off for AI, the science and math are covered in this respect, with a few good dramatic names thrown into the mix to keep it interesting. One of my favorites, catastrophe theory, part of bifurcation theory and tied to chaos theory, studies dynamical systems through geometry with seven elementary types of bifurcations called catastrophes.
When one looks at a cube, there’s no sense of a catastrophe happening; it seems pretty stable as a shape. But per this theory, every fold and corner of a cube is a catastrophe. The bifurcated line where the fold happens is a point, or point in time, where the system that is in place suddenly changes direction, making it a catastrophe. Perhaps I’m oversimplifying the math and science, but that’s how I see it. Catastrophe theory points out to us, through math, that within a system,
some things suddenly change, jump the track, or leap frog over the existing status quo sending us in a new direction.
The algorithm of this state of being says we can’t keep going on this path, it’s time for something new and it can be big in big ways — huge, gargantuan, wicked.
Some people see AI as a catastrophe or chaos just waiting to happen, others see it as an opportunity for the future. But when we jump the track is when it gets interesting, at least to me, an amateur positive psychologist, because I believe in our innate sense of bringing in the new or the future. If I’m doing research on a topic and I suddenly find myself interested in a completely different topic and follow it, I have unbounded myself and moved in a completely new area, or era depending on the situation.
I have folded myself into a new direction in time, like a wrinkle, or a Wrinkle in Time, like the book, which describes a tesseract as a wrinkle you can travel through time with. But a tesseract actually describes the world in 4D, the fourth dimension being time. It moves us beyond our 3D world and into a space and time that allows us to see a brighter future. Imagine if AI could do the same thing for us: help us believe in, see, and create a brighter future, a better world that’s more willing to change and better able to meet the needs and demands of an ever-expanding network of thinkers and doers on a planet that can’t physically expand, but can take on the power of being part of a collective that believes in the light of tomorrow.
We can fundamentally make that shift in perspective by bringing in all our knowledge to solve the problems of how we’re leaving the planet to future generations. So it all ties together in one big catastrophe. Just kidding. It’s all very elegantly tied together to allow for understanding that we’re all in this together and when we can’t change it individually, we can as a team, a unit, a collective. Or how it can become a new thinking ability that reaches out across the planet in a distributed, cognitive relationship with solving our biggest, baddest problems.
AI and Wicked Problems
I had an instructor in design school who dismissed computer-aided design (CAD) by holding up his pencil and asking if there was going to be a conference for his favorite design tool. He wasn’t wrong, but I think it’s the scalable quality of a technology that causes us to find the function that excites the creator in us. When CAD came along, we were no longer required to bend over our T-squares, constantly sharpening our pencils or cursing capillary action while the ink got sucked under our triangles.
CAD quickly moved through innovations that took it from a 2D version of our drafting boards to 3D objects that contained embedded information. This gave rise to building information modeling (BIM) that requires everything to be in 3D, eliminating the need to hand draw a perspective. Our T-Squares and triangles found themselves in drawers or hung on the wall for decoration — thankfully, I might add.
AI is just another version of this migration to move the processes that keep us from realizing another version of who we are. Yes, we can design, create, make, do one-by-one (some would say bespoke), or we can process a lot of things all at once by giving it over to an innate process of machines, algorithms, and other mechanical functions.
AI is essentially a learning robot that takes on tasks through either machine or deep learning, making our processes easier to accomplish. We’re all learning robots, really, but we’re not machines, which makes for good science-fiction reality TV all rolled up into one. Who’s going to take over the planet in our alternate reality? Robots.
But we’re not there yet, so we can all heave a collective sigh of relief and get on with more important things like having a market square in the middle of a dust bowl, or believing that we’re being smoked out because AI is tricking us to buy things or be vulnerable by exposing ourselves either through our personal data or some form of identity crisis. I believe that is another main fear factor behind AI — being tricked or overtaken by some inanimate object that we can’t fully relate to. We’ve all seen this movie before, and Atlas, Blade Runner, and others all have a postscript about how humanity is doomed by the very tech we’ve invented.
If AI isn’t a robot destined to take us down in some kind of evil way, is it about becoming better business partners or collaborators? I would say no because machines are not relatable, but they are tools, just like my word processor, food processor and computer processor — we process with our tools but relate to each other. As a generative, distributed form of science through technology, let’s set AI to the task of solving all our world’s wicked problems, also known as those big, hairy, complicated beliefs that we can’t see our way through. I’m talking about oceans polluted with plastics, melting polar ice caps, poverty, air pollution, etc. Short of another lock down like during the pandemic or a travel ban like after the 9/11 attacks, we need to find solutions to these problems because we can’t go backwards.
We know the problems are there but some believe they are just not solvable. This is a force to be reckoned with because AI itself is seen as a wicked problem by some as it sets up the parameters of who we are. If your identity is left out of the chatbot learning program, does that mean you don’t matter or don’t exist? It’s a very real fear for some as women and minorities struggle with this daily even without the chatbots reinforcing biases.
Maybe we’re headed towards a dystopian reality created by a man behind a curtain while the women have magical powers but must succeed beyond all measure of a doubt or become wicked. Maybe we’re already there, depending on who you are and your world view.
I think we should change our perspective on what AI can do and put the internet of things to good use. Or as we covered, let’s use distributed cognition to allow our learning to happen in our brains and environment, in concert with AI and with each other. Let’s use our Source code switch to realize new perspectives that allow for new behaviors to happen in ourselves and our environment, machines included.
Quantum computing is a good example of this and ties into AI. It uses quantum physics for computation, which isn’t sci-fi; it’s reality on a micro-level. While this is a whole different article, I think the process of getting AI to the level of discourse around quantum computing is important because WE are the source of information in the algorithm.
Right now the barrier to quantum computing seems to be about hardware and energy requirements. Despite it following the native process of quantum physics, it consumes a lot of energy, causing the big companies that use it — and AI for that matter — to look for different power options including nuclear power plants, pushing it back into the area of a wicked problem. But I think if it’s truly following the Earth’s native, more natural processes, then it shouldn’t require so much energy, it should be an effortless flow of source to output, but I’ll leave the math to the experts.
The world’s wicked problems are due for a perspective change that we can’t afford to ignore. From a science and engineering perspective, it makes perfect sense to use AGI to solve them because we just don’t have the time to keep trying the way it’s always been done. Individual actions do add up to a whole that creates an evolution of change, but we also need to tackle it from a systems perspective because it’s the overlapping systems that make present issues so difficult and sticky compared to previous times. In the past it was about either stopping or starting something, but today’s world is too intertwined, too fast paced, and perhaps too big (or little) to fail. We can’t go backward, we have to move forward, but we also can’t keep moving in the same direction of constant extraction of people and natural resources on our planet. Let’s move our wicked problems off our plates and let AI solve them, not from an extraction point of view, but from the collective wisdom of knowing how to believe in the future. It will be well worth our effort.