Thoughts on AI

First, I don’t claim to be an expert, only an informed individual. I read widely from different sources and I work in academia where it seems AI is all anyone talks about lately. I have also read the book Resisting AI: An Anti-fascist Approach to Artificial Intelligence by Dan McQuillan, and highly recommend it.

AI, artificial intelligence, is a misnomer. There is nothing intelligent about it; it does not actually think or analyze anything. AI works on statistical probability. In other words, a dataset is created from all sorts of information, and when you ask a question, the algorithm uses your prompt to look for patterns in that dataset and returns the most probable match as an answer. This is why AIs “hallucinate,” or as I prefer to call it, make stuff up. Because AI has no intelligence and no brain, saying it hallucinates obscures what’s really going on.
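To make that concrete, here is a deliberately tiny sketch in Python. It is my own toy illustration, not anything from McQuillan’s book and nothing like a real system: a bigram model that only counts which word follows which in its “training” text and then strings likely words together. Real LLMs are enormously more elaborate, but the basic move of predicting probable next words from patterns in the data, rather than understanding anything, is the same.

```python
# Toy sketch of "statistical probability" text generation (a bigram model).
# It has no understanding; it just samples a statistically likely next word.
import random
from collections import defaultdict, Counter

corpus = "the scarlet letter is a novel the scarlet a is a symbol of shame".split()

# "Training": count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        counts = following.get(word)
        if not counts:
            break
        # Pick the next word in proportion to how often it appeared in the data.
        words, weights = zip(*counts.items())
        word = random.choices(words, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the scarlet a is a symbol of shame"
```

Nothing in there knows what shame or a scarlet letter is; it only knows which words tend to sit next to each other.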

AI has been around since long before ChatGPT came on the scene. If you use Amazon’s Alexa or Apple’s Siri, that’s AI. Facial recognition software is AI, and “self-driving” cars are AI. ChatGPT is just another kind of AI, built on large language models. That means it’s fed lots and lots of words from different sources: Project Gutenberg, your blog if you have one, and copyrighted books, for which OpenAI and a couple of other AI outfits are being sued by authors. If it’s on the internet, it’s pretty safe to say it’s been fed into an LLM dataset.

How does it work? Say you’re in high school and you ask ChatGPT to write you a five-paragraph essay of 500 to 800 words on the meaning of the scarlet A in Nathaniel Hawthorne’s Scarlet Letter. The AI searches its data for patterns around Hawthorne and the Scarlet Letter and assembles a five-paragraph essay from what it finds. It doesn’t do any analysis whatsoever; it merely uses statistical probability to stitch together an essay from what has already been written about the novel. This might seem great if you’re a student, but while the grade on the assignment might be a pass, the student has failed to actually learn anything. Studies also show that high-achieving students who use AI do worse on exams than under-achieving students who use AI.

In the United States, AI can pass the bar exam, the test aspiring lawyers must take in order to practice law. Yet there are a number of instances in which lazy attorneys have used AI to write their legal briefs and then been sanctioned by a judge because the AI cited cases that didn’t exist.

But AI doesn’t just make things up; it is also biased, just like the people who write the algorithms and the data it is fed. And this is where it is most dangerous, since a good many people consider it to be a neutral technology.

Say you are a Fortune 500 company and want to make sure the people you hire and promote are going to be the most successful at their jobs. Currently, you’re trying to figure out who to hire for CFO (Chief Financial Officer). So you get some help from AI by feeding into it the resumés of past CFOs who have been supremely successful in the job. Then you feed in the resumés of all the candidates for the position and ask who will be most successful. Since the data you fed in historically belongs to well-off white men who attended elite universities and had privilege and access to people and jobs that helped them along their career path to becoming stellar CFOs, whose resumé from the batch of candidates do you think will be selected as the best choice?

Will it be the Black woman who attended mostly public schools, worked her way through undergrad, and managed to get a graduate school scholarship to a top, but not Ivy League, school? Maybe the man whose family immigrated to the United States from Guatemala when he was a small boy? Will it be the person from Vietnam who attended top-notch schools in Asia before moving to the United States on a work visa? Or will it be the white man who came from a connected, well-off family, attended Harvard as a legacy, and got important jobs at big companies because his family knew people? Companies that use AI to aid their hiring quickly discover that the diversity of their employees, especially at the top levels, which aren’t exactly diverse to begin with, does not increase.
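Here is an equally toy sketch of how that skew gets baked in. The traits, candidates, and scoring are entirely made up for illustration, and a real screening system would be far more elaborate, but the underlying logic is the same: reward resemblance to the past.

```python
# Toy sketch: "learn" what a successful CFO looks like from past hires,
# then score new candidates by how closely they match that profile.
# All traits and candidates are invented for illustration.
from collections import Counter

past_successful_cfos = [
    {"ivy_league", "male", "family_connections", "finance_internship"},
    {"ivy_league", "male", "family_connections", "mba"},
    {"ivy_league", "male", "country_club", "mba"},
]

# "Training": count which traits the past successes share.
trait_weights = Counter()
for profile in past_successful_cfos:
    trait_weights.update(profile)

candidates = {
    "candidate_a": {"public_school", "scholarship", "mba", "finance_internship"},
    "candidate_b": {"ivy_league", "male", "family_connections", "mba"},
}

# "Prediction": score each candidate by overlap with the historical profile.
for name, traits in candidates.items():
    score = sum(trait_weights[trait] for trait in traits)
    print(name, score)  # candidate_b scores far higher

# candidate_b wins, not because they would do the job better, but because
# the history the model learned from was already skewed toward people like them.
```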

When it comes to facial recognition, AI is terrible at recognizing Black faces. There are numerous instances in which Black men have been wrongfully arrested and held for hours, even days, because they were misidentified by facial recognition. Of course, the police are never held accountable, because it was the AI, not them, that made the mistake.

And that is another danger of AI. Most people have no idea how it works but accept the results as true. When the results turn out to be wrong, who is responsible? The person who accepted the results? The person who made the query? The people who created the data to train the AI? The company that wrote the algorithm? What about those Cruise robotaxis in San Francisco that are no longer allowed to operate because they ran over people? If there had been a driver, they would be in jail. But since there was no driver, the company is under investigation, and who knows if any of the victims or their families will receive compensation. Maybe the company will eventually be made to pay some kind of fine for withholding safety information, but no one is going to jail.

Dan McQuillan’s book, Resisting AI, argues:

AI is a political technology in its material existence and in its effects. The concrete operations of AI are completely entangled with the social matrix around them, and…the consequences are politically reactionary. The net effect of applied AI…is to amplify existing inequalities and injustices, deepening existing divisions on the way to full-on algorithmic authoritarianism.

Resisting AI, page 3

AI is about quantification and optimization, a neoliberal capitalist’s dream, and it creates a kind of “abstract utilitarianism” as McQuillan calls it, a “calculative ordering” that structures the world in a particular way. Anything that can’t be turned into math gets left out, thus perpetuating discrimination, inequality, and all the other divisions that continue to grow and grow.

Then there are all the things behind AI that nobody talks about because we are too distracted making hilarious pictures of ourselves “in the style of” doing outrageous things we could never actually do.

AI doesn’t magically know what Van Gogh’s painting style is, or what a 1965 Corvette looks like. There are thousands and thousands of people hard at work identifying and labeling all the data so the AI can use it. These people are generally in the Global South, in countries like Venezuela and Kenya, poorly paid and forced to work long hours. They get no benefits like sick leave or health care or vacation. They do things like draw boxes around objects in pictures and label them to train self-driving cars. Or they transcribe voice recordings from Siri and Alexa. They tag images from social media and perform all sorts of other tasks, including wading through traumatic data that affects their mental health. All so social media companies like Facebook can moderate content, a student can get away with not having to think, and someone can post a picture of themselves as an elf in a scene from Lord of the Rings. If you use AI, you are supporting extractive labor practices.

There is an environmental toll as well. AI requires lots of servers, and servers require lots of electricity. They also require a whole lot of water, because servers get hot, and if they overheat they go kablooey. The amount of water needed to keep all the servers cool is astounding. And in the United States, places like Northern California, Phoenix, Arizona, and Los Angeles are among the top ten data center locations. These are places already short of water and, in the case of Phoenix, extremely hot, which requires even more robust cooling systems. There are some who claim AI will help solve climate change; they are wrong.

If these things aren’t enough to get a person to stop and think before using any kind of AI, consider what relying on AI might do to you. If you think the internet has shortened your attention span and Google has made you a little less smart, if you can’t navigate your way around your own city without GPS and you don’t even know how to read a map, what’s going to happen when you start asking AI to write your emails? Or when you let it tell you who you should vote for in an election because you just don’t have the time to do the research yourself? What are you going to do when AI takes your job and instead of using AI to help you work better, you now find yourself as a gig worker labeling data so AI can work better?

AI can be used for good things. In medicine, AI is being used to help doctors find cancer and diagnose complicated illnesses. AI is great at sifting through gigantic datasets in physics and other mathematical sciences that would take a human years and years to work through. AI can be useful, but not all on its own. AI is not Artificial General Intelligence (AGI) and likely never will be. AI doesn’t think and still requires humans to both feed it and look with critical discernment at the information it produces.

Congress heard testimony in 2023 from various tech people, including the ones who unleashed ChatGPT on the world. Many of them talked about how dangerous AI is (while their companies are making millions off of it). But what they perceive as dangerous—a future with AGI in which the computers decide they are in charge and take over the world—is not the real danger. That is a tech bro fantasyland nightmare. The real danger is in the here and now, to our society, to the environment, to our ability to think critically.

I’d like to say I have never used AI, but since it is being embedded in more and more apps and websites and databases, I can’t say that. I can say I have never willingly used AI, and I refuse to use ChatGPT or DALL-E or anything of that sort. So I’m sorry, you will never see a blog post written by AI here, nor will I tell you how great AI was in helping me plan my garden. When the electric grid goes down and fossil energy is gone, I’d like to still be able to think and write and make decisions and find my way around on my bike or walking without my phone telling me to turn right in 0.25 miles. I’m not worried about thinking robots taking over the world; I’m worried about unthinking humans.

Reading
  • Book: The Comfort of Crows by Margaret Renkl. This is a lovely book. There is a short essay for every week of the year with sharp observations of what goes on in her backyard. And each essay is accompanied by gorgeous art created by her husband.
  • Article: Global heating will pass 1.5C threshold this year, top ex-Nasa scientist says. In 2023 we exceeded 1.5C for a day or two; the prediction is that in 2024 we will arrive there permanently, on our way to an even hotter planet. So much for keeping warming below 1.5C.
Listening
  • Podcast: The Great Simplification: Jane Muncke: Perils of Plastic Packaging. Dang, this was scary. If you’ve followed me for a while, you know James and I went through a period of moving away from buying food packaged in plastic. There are still a few food things we buy that come in plastic, like tofu, because we can’t buy them any other way. This episode made me think about all the other plastics in our food and home. After our initial purge we’ve been resting on our laurels, but it’s time to continue to de-plastic our lives. You may have heard recently that there is an alarming amount of nanoplastics in bottled water. When I mentioned this to someone I know who was about to buy bottled water, he shrugged and said, we all have to die of something. It left me rather speechless. While it is impossible to create a plastic-free bubble, especially since the bubble is made of plastic, I prefer to keep as much plastic out of my body as I can, thanks.
Watching
  • Series: First Kill (2022). We’ve only watched two episodes, I think, and the series lasted only one season, but it’s not bad. Two high school girls, one a vampire and the other a monster hunter, fall in love, and so spins out a tale of star-crossed lovers.
Quote

Apocalyptic stories always get the apocalypse wrong. The tragedy is not the failed world’s barren ugliness. The tragedy is its clinging beauty even as it fails. Until the very last cricket falls silent, the beauty-besotted will find a reason to love the world.

Margaret Renkl, “Who Will Mourn Them When They Are Gone?” in The Comfort of Crows
James’s Kitchen Wizardry

It was a hot chocolate sort of day today. The base recipe we use calls for cashews, but because there are global worker safety issues with cashews, we use hazelnuts instead. James also added a dash of mint extract. We enjoyed our cups after coming indoors from 0°F/-18°C, James from checking on the chickens and tidying the coop, me from shoveling the little bit of snow from the front porch and public sidewalk. And if you are wondering how I shoveled snow, as of Friday I am no longer wearing a sling. My fractured clavicle is healed. More on that next time.

18 thoughts on “Thoughts on AI”

  1. This is a really interesting discussion, Stefanie ~ thank you for sharing. It was the last paragraph, in fact the last line, that resonated with me very strongly. I am already concerned at so many people’s willingness to be dependent on algorithms to organise and structure their lives and I do wonder where it is all leading. Whatever happened to thinking for oneself? I am an unashamed dinosaur who doesn’t even have a smartphone; I don’t do apps or GPS (I love a map!) or any of that other stuff so I am very happy to leave AI well alone and carry on as a happy peasant doing things for myself. 😊

    1. Thank you Lisi! In my younger days when tech was so new and it offered so much possibility in the 90s, I was excited and thought cautious people just needed to get out of the way. How the tech and I have changed! 😀 I am also very happy to carry on as a peasant and do things for myself, even though it took me a little bit to get there.

  2. Iliana

    Thank you for the book recommendation of Resisting AI. Such a fascinating topic and honestly if you hear newscasters they immediately mention things like robots taking over but there’s so much more to it. For example, I hadn’t even thought about water scarcity! Great points. And, yay for being out of the sling!

    1. You bet Iliana! The people creating the robots are more likely to take over than the robots themselves. Definitely lots to think about but AI is presented to us as inevitable, and it’s not.

  3. Ohmigawsh I love hot chocolate after shovelling. And now you’re back to shovelling... woo hoo! The article you’ve linked to has some reliable brands listed, at the end, to avoid the worker safety issues concerning you with cashews; these problems proliferate with almost all nuts, so I’m thankful there are some other avenues since we don’t live in a nut-tree-friendly part of the world. (Which is a problem, too, of course.)

    We should all do everything we can to educate ourselves about AI and the related issues and it’s great you work somewhere conversations are unfolding. Those of us who are cultivating a mindful approach to and global perspective on our relationships with digital tech are already primed to work with these changes.

    But it’s a challenge for sure because many of the points you raise are equally applicable to…the internet…search engines…home computers…cell phones. If we’ve got a phone and a blog, we’re already part of that extractivist model, and if we’re not directly using AI ourselves, the companies providing us with goods and services certainly are, so we are fully participating (and, often, benefitting).

    I’m not friends with Alexis, but every time my phone tells me the name of a song I like that’s playing (my most common request) I’m sure to thank it extra nicely. Hee hee (I believe that’s still evolving into the kind of AI you’re referencing above, set for this year, which will happen automagically, but I gave permission by virtue of buying and using the phone.)

    1. Hot chocolate after shoveling snow is the best Marcie! The reliable brands of cashews mentioned in the article are not available here that I have found. We even contacted the company whose cashews our co-op sells in bulk and they told us that they bought their nuts from a variety of suppliers and couldn’t confirm how they were processed. So we stopped buying them. Given how expensive they are, it hasn’t been much of a loss 🙂

      AI has been shoved at us so fast we haven’t had time to stop and think about what it does or how it works and whether we want it in our lives. We aren’t given much of a choice or at least we are told we have no choice. It’s the way of the future and we had better go along or get left behind. I’m leaning heavily toward being left behind 🙂

      1. Brands vary widely across the border too (one reason is French-language reg’s for Canada, which can be too $ for small eco-aware companies with higher costs related to doing business more ethically). We use a lot of different ethically sourced nuts but are steadily trying to use more sunflower seeds (which can be Cdn grown).

        Was the co-op able to make those assurances about the other nuts they sell, but just not the cashews? Or are you still in the process of exploring all this? This is why so many people buy without thinking, because once you start thinking, there’s so much research! You become a perpetual student.

        It’s been in development for many years, but now it’s in the headlines so it feels sudden; we’ve all been making small decisions related to all this stuff all along the way (and opting out of some things, at the individual consumer level, which is still possible, though uncommon). Maybe as you continue to read and learn, it will feel less overwhelmingly bad.

        1. The co-op can’t make assurances, we have to contact the companies and they can only say they are fair trade, which is why the only tree nuts we buy these days are hazelnuts and walnuts, both of which can be grown in the U.S. Otherwise, we stick with peanuts, sunflower seeds, and other seeds like pepitas, chia, and sesame. Definitely a lot of research! And companies don’t help make it easy since they source their products from all over the place and don’t necessarily own the production or processing.

  4. Katrina Stephen

    I can’t be doing with Alexis or any of that sort of nonsense and I am refusing to get in the car if J uses the satnav as we never had any problems using maps on our travels. We were directed to the middle of Nottingham (south) by the satnav on our last trip and J didn’t question the direction, he drove straight past the roadsign NORTH – which was where we needed to be! I’m still fuming! He has a PhD in Chemistry – hmmm.

    1. Hahaha Katrina! We have a similar story using Google to help us navigate when we went to visit James’s brother in Albuquerque quite a few years ago now. We were directed to a dead end street that had no houses on it and told we had arrived at our destination. Stick to the maps! That is if you can find them anymore.

  5. I haven’t done one of those put my picture into an AI generator things but you’re right, I suppose unwittingly I’ve used AI. It’s all very scary. The older I get the more I want to just look up at the sky and watch birds and forget computers in general.

  6. This might hurt your heart, but last semester in ASL class I was on a debate team that defended the use of AI in art and won, but I knew my audience and what kind of evidence is most convincing to him, so I really went for it. I had very limited choices in what I could debate. I wanted a topic about students at my university (A BIBLE COLLEGE) being required to volunteer once per week, but I couldn’t find anyone who agreed with that to get my team.
