
Musings

More on The Unknown Race

My next book of poetry, which should release sometime in early 2025, focuses primarily on humanity.

 

Humanity's frailty, resilience, incompetence, and brilliance. All that is great and terrible about humanity.

 

I find myself drawn to topics that look at humanity in a negative light, which expose its ugly underbelly, and show how flawed we are. And while I know we are capable of greatness, sometimes our accomplishments come at great cost, or are used for nefarious purposes. In our haste to explore and research, we often take shortcuts that have dire consequences, and these are the things that cause humans to expedite their own extinction.

 

Then again, humans often face unbeatable odds, invincible foes, and unsolvable puzzles and succeed. We don't always do it gracefully, but when we put our minds and hearts to it, we are able to do things that we previously saw as unimaginable.

 

We are resilient, brilliant, strong, capable, determined, and stubborn. Our minds work tirelessly to solve problems that we are sure are impossible, and when we succeed, we surprise ourselves. And our hearts are boundless in their ability to give, and we are often willing to sacrifice ourselves for others. We're generous to a fault.

 

These are some of the things that make humans worthwhile, but there are many things that make me look at humanity and wonder why we exist at all. Why do we commit atrocious acts of violence? Why do we cheat and steal? Why do we rape and pillage? What makes us so greedy and callous?

 

And while my new book won't cover every topic mentioned here, as I go through my days, see what's happening in the world, listen to music, watch TV, talk to people, and all the rest of it, I think about us, where we're headed, and where we came from. I delve into humanity, what I know of it, and I try to find words that express my thoughts and feelings.

 

To sum up, the first and largest chapter of the book is a collection of poems that explore humanity. The darkness and the light. It provides a mirror into my own thoughts as well as a reflection of what many of you may be thinking.

 

I'm finishing the book now, and once the final edits are complete, and I have the cover and illustrations, I'll reveal more.

 

Until then, I can tell you the book will be called The Unknown Race.

 

Until next time.

 

Gary


The Unknown Race

Winter is here, and with it comes not only freezing cold, but also depression, listlessness, and a yearning for spring.

 

I've turned my own melancholy heart to writing poetry, and I just submitted the manuscript for my new book, The Unknown Race, to my editor. It's a book that explores humanity, its frailty and resilience, but it also ventures into the territory of that which we either cannot or refuse to control, such as the devastation of our planet. It focuses on our eventual extinction, and the judgment of the gods on humanity.

 

But the book isn't a total bummer. I do have a small selection of poems that express my appreciation for various aspects of life, and that's how the book ends.

 

I hope to release The Unknown Race before year's end, but it may slip to early 2025.

 

More to come as I have it.

 

Yours truly -- Gary


What is Artificial Intelligence?

Introduction


AI is a major buzzword now, often appearing in our magazines, on the nightly news, or in the newspaper. And regardless of what you think about artificial intelligence (AI), it's here to stay. It may one day annihilate us as many predict. Or, more likely, it will be yet another technology used by large corporations to harness information from billions of unsuspecting humans. Regardless, most of the tech companies continue to incorporate AI into their operating systems, applications, and hardware. There's no escaping its grasp unless you refuse to use technology altogether.

Much of today's artificial intelligence is powered by LLMs (large language models), but the field also encompasses ML (machine learning), deep learning, NLP (natural language processing), computer vision, generative AI, and more.

 

The term AI doesn't mean just one thing, and never has. Video games have been using "AI" for decades to describe the algorithms that govern the behavior of NPCs (non-player characters), and AI has also been used in expert systems and neural networks for many decades.

 

I present the following not as an endorsement of AI, but rather as an explainer—for those who wonder about AI or are just looking to understand what it is.

 

History of AI


Man has long sought to create a machine that could act or think the same as a human, and this desire predates computers by thousands of years. Mythology is filled with tales of man-made beings fashioned from clay or other materials who, while not intelligent, were able to perform basic tasks, such as protecting their masters. These beings were imbued with life by their masters, or by the gods, and many believed them capable not only of life, but also of reason.

 

Science fiction writers in the 19th century often wrote of artificially intelligent beings, and modern scientists and engineers are often influenced by these writings.

 

Alan Turing, an English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist, created the Turing test in 1950. It asked the question, "Can machines think?" Today's AI is often able to pass this test, or come close to it, but most computer scientists know that artificial intelligence is unable to think as well as most humans can.

 

In 1956, a Dartmouth workshop was held to discuss the possibility of machines that could simulate human intelligence. John McCarthy coined the phrase "artificial intelligence," and the meeting was the impetus for a "cognitive revolution"—and from this the field of AI began.

 

From 1964 to 1966, ELIZA, an early natural language processing computer program, was created at MIT by Joseph Weizenbaum. Rudimentary by modern standards, ELIZA analyzed the questions and comments typed in by the user and responded with appropriate-sounding replies.

 

Expert systems came on the scene in the early 1970s. They were originally conceived around 1965 by Edward Feigenbaum, the "father of expert systems," at the Stanford Heuristic Programming Project. An expert system combines a knowledge base with an inference engine, along with other components, to help users solve complex problems by letting the software aid in the decision-making process.

 

In the mid-1980s, Terrence Sejnowski and Charles Rosenberg developed NETtalk, a program with the ability to pronounce written English text. Given text as input, the software learned to match it to phonetic transcriptions; the result was a program that could read English aloud.

 

IBM's Deep Blue supercomputer defeated chess champion Garry Kasparov in 1997, a year after losing its first match against him.

 

Siri, the first digital assistant for modern smartphones, was released in 2010 as an app for Apple's iPhone. The following year, it was integrated into iOS. Over the following years, Microsoft, Amazon, Google, and many others released similar digital assistants.

 

In 2011, IBM's Watson supercomputer beat Ken Jennings and Brad Rutter at Jeopardy!

 

The transformer architecture was developed in 2017, and it allowed large language models (LLMs) to thrive. These LLMs appeared to exhibit significant cognition, reasoning, and creativity, and could write, create images, and more.

 

OpenAI released the GPT-3 LLM in 2020. Microsoft licensed this technology and began to incorporate it into its technology offerings.

 

GPT-4 was released in 2023. Still an imperfect LLM, it was a significant upgrade over previous versions of OpenAI's technology.

 

In 2024, OpenAI released GPT-4o, Anthropic released Claude 3, xAI released Grok-1, and Google released Gemini 1.5. These are only a few of the more than 20 LLMs available this year.

 

2024 is also the year of the AI computer. Intel, AMD, Qualcomm, and Apple all released neural processing units (NPUs) capable of handling artificial intelligence tasks locally, instead of having to send queries to the cloud. These NPUs are integrated into a variety of personal computers and provide the horsepower necessary for users to run AI without relying on a network connection.

AI's Limits


AI can provide useful services, such as summarizing written text, but it's prone to hallucination: confidently stating things that aren't true. In practical terms, AI uses the information it has learned to perform its work, but its training is based on whatever information it encountered during its "lifetime," and sometimes that information is incorrect or incomplete.

 

AI models make assumptions based on what they know, and when they are asked a question, they often provide partially or entirely incorrect answers. This is mitigated to some degree when companies such as OpenAI, Google, and Microsoft tailor their LLMs to avoid false responses. But it isn't always possible to avoid inaccurate information.

 

This is an interesting phenomenon when you consider that humans have the same sorts of issues. Humans often reply to questions with incorrect responses, get mixed up and provide partially correct responses, or make up responses altogether. AI is like humans in this way.

 

Generative AI


Generative AI is an umbrella term for AI technology focused on content creation. It is typically used to create text, images, audio, or software code. Generative AI is trained on billions of pieces of existing content, which it then learns to imitate when producing new content.

 

Since generative AI requires massive amounts of content to train sufficiently, companies like OpenAI, Google, and X often train on private or copyrighted content, which, while not yet proven illegal, is often viewed as immoral. There are several lawsuits in progress that will ultimately determine the legality of this practice.

 

Partly because of these lawsuits, and the threat they see from AI, more companies are now licensing the content of their news articles, songs, and artwork to AI firms, both to gain revenue and to keep their work from being incorporated into AI models without permission.

 

Large Language Models (LLMs)


LLMs, or large language models, are a subset of generative AI.

 

An LLM is a neural network trained on billions of words of whatever language is fed into it. It is a specially trained AI that uses natural language processing to both understand and generate humanlike text-based content.

 

Even the researchers who develop LLMs don't fully understand them, but they have been commercialized and are being used for a variety of applications, including medical research, aerospace, and banking.

 

The words are not stored in an LLM as words or even letters, though. They are stored as word vectors: long lists of real numbers that represent the word, or, more precisely, word tokens, which are fragments of words.

 

These word tokens are clustered together by meaning, so words that are synonymous tend to cluster together in the same area of the LLM.
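To make this clustering idea concrete, here's a toy sketch in Python. The three-number "word vectors" below are made up for illustration (real models learn vectors with hundreds or thousands of dimensions); the point is only that vectors pointing in similar directions, measured here with cosine similarity, correspond to words with similar meanings.

```python
import math

# Hand-made toy vectors -- NOT real model weights, just an illustration.
vectors = {
    "happy":  [0.9, 0.1, 0.0],
    "joyful": [0.8, 0.2, 0.1],   # points nearly the same way as "happy"
    "bank":   [0.1, 0.9, 0.2],   # points elsewhere: unrelated meaning
}

def cosine_similarity(a, b):
    """How closely two vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    mag_a = math.sqrt(sum(x * x for x in a))
    mag_b = math.sqrt(sum(y * y for y in b))
    return dot / (mag_a * mag_b)

print(cosine_similarity(vectors["happy"], vectors["joyful"]))  # close to 1.0
print(cosine_similarity(vectors["happy"], vectors["bank"]))    # much lower
```

Synonyms score near 1.0 while unrelated words score much lower, which is the same sense in which tokens "cluster together" inside a real model.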

 

The same word can appear multiple times in the LLM if that word has multiple meanings. Each meaning is clustered together with other words with the same meaning.

 

Ultimately, an LLM can predict which word should appear after another by using a mechanism called a transformer.
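As a rough sketch of that prediction step (not an actual transformer, and with made-up scores), a model assigns a score to each candidate next token, then a softmax function turns those scores into probabilities, and the highest-probability token is the prediction:

```python
import math

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign after "The cat sat on the ..."
candidates = {"mat": 3.2, "roof": 2.1, "moon": 0.4}
probs = softmax(list(candidates.values()))

for word, p in zip(candidates, probs):
    print(f"{word}: {p:.2f}")

# The predicted next word is the candidate with the highest probability.
best = max(zip(candidates, probs), key=lambda wp: wp[1])[0]
print("predicted next word:", best)  # mat
```

A real transformer computes those scores over its entire token vocabulary, using the whole prompt as context, but the final step is essentially this.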

 

LLMs are responsible for taking input text (a prompt), understanding it, and responding. For a prompt such as "explain to me how interest rates are calculated by the Fed," the LLM has to parse the prompt, make sense of its meaning, find the answer, and return that answer in the same language asked.

 

LLMs are useful for research, or for answering questions you would typically search for on the web. They are good at pulling information from multiple sources and providing summaries. But since AI is prone to hallucinations, it's always a good idea to confirm an answer against other sources before relying on it.

 

Conclusion


Anyone with a modern smartphone or computer can take advantage of AI. This makes AI seem like it's ready for prime time, but the fact is, it may not be. It's an unproven technology in many ways, but that won't stop big tech companies from attempting to monetize it, and it hasn't stopped us from using it.

 

AI's future will be interesting, but it's unlikely to take over the world in the short term. It's more likely to have its successes and failures, and it could be decades before it matures enough to take our jobs, and even longer before it rises up to take over the world.

 

References


https://www.theverge.com/24201441/ai-terminology-explained-humans

https://www.pcmag.com/explainers/what-is-microsoft-copilot

https://en.wikipedia.org/wiki/Turing_test

https://en.wikipedia.org/wiki/Alan_Turing

https://en.wikipedia.org/wiki/History_of_artificial_intelligence

https://www.coursera.org/articles/what-is-artificial-intelligence

Generative AI vs. Large Language Models (LLMs): Key Differences (explodingtopics.com)

History of artificial intelligence | Bosch Global

https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/

https://ig.ft.com/generative-ai/

Cognitive revolution - Wikipedia

Artificial intelligence in fiction - Wikipedia

Expert system - Wikipedia

NETtalk (artificial neural network) - Wikipedia

LLM vs generative AI: fundamentally different but compatible | Algolia
