ChatGPT, Bard, and other AI programs

It seems nearly everyone, at least in America, is fully on board with using all of these new AI programs. Some of them don't give users a choice.

LinkedIn’s new AI program, for example, is “helping” paid accounts with their profiles while automatically curating content based on users’ behaviors and interests. People can choose not to use LinkedIn, but they can’t opt out of its AI features. The popular online marketplace Etsy has an AI program built to improve product listings.

In 2022, GoDaddy partnered with a security program called Inky. Before the partnership, GoDaddy apparently didn’t take into account how many of its clients use Microsoft Outlook, which is infused with Microsoft Defender, a security program that has been part of Windows since the XP era. The pairing created what I call “the perfect storm”: the two programs don’t work well together, and Inky’s heightened filtering sometimes prevents emails and their attachments from being sent or received. I have experienced this firsthand through a real estate client.

My Personal Opinion

These AI programs are just that: programs created by people to perform a certain function or set of functions. In the case of ChatGPT or Bard, a user types or speaks a request through the desktop or app interface, and the program returns results, much like search engine results pages (SERPs). Those results depend largely on the user’s app and program usage; activity in other programs such as social media; search queries (i.e., what questions are typed or asked into Bing or Google); browsing habits (i.e., how much of a page the user reads, bookmarks, or downloads, and where the user goes next); and what we do and don’t allow our browsers and mobile devices to access.

While it may look like ChatGPT is “writing” answers to queries, in my opinion it is merely pulling together information readily available from search engines and other sources, such as news websites, medical directories, and dictionaries, and possibly even files stored locally rather than online.

If I’m right, we are potentially opening ourselves up to many different outcomes by choosing to use AI programs, including but not limited to:

  •  plagiarism (e.g., the sources an AI program uses to find and return the requested information may be subject to copyright and/or trademark law)
  •  theft or misappropriation of proprietary information (e.g., the recipe for Coca-Cola)
  •  use of others’ paid work (e.g., research, artists’ images, artwork, songs)
  •  loss of our own private information regardless of what permissions were originally given or assigned
  •  creation of fake articles or stories (e.g., ChatGPT’s fabricated sexual harassment accusation against law professor Jonathan Turley)

A week ago, a user asked ChatGPT to create a malware program, and ChatGPT complied. Theoretically, then, if someone asked, “How do I make a nuclear bomb?”, would one of these programs actually answer with instructions for building such a device? Until ChatGPT wrote that malware, my research pointed to numerous safety mechanisms put in place to prevent this from happening; between the malware and the fabricated story about Mr. Turley, I’m no longer sure those safety mechanisms hold.

If you’re not asking “What’s next?” just yet, you should be. What isn’t on the table for these programs, which are not yet truly artificially intelligent? While I would argue that the majority of people would use these platforms to get work done more quickly, thoroughly, and accurately (by their own definitions of those terms), I have to question what happens if someone with less moral conviction got the recipe for a nuclear bomb and decided to build one.

In our rush to grab the next shiny object and avoid FOMO (fear of missing out), humanity may be letting its courtship with AI cloud its judgment.

What potential negative consequences is humanity unwittingly accepting by using AI programs? And what is anyone’s “plan B” for avoiding them?

Just because we CAN use it, SHOULD we?

I’ll muse more about this in future posts.


Be strategic. Be visible. Be found.

Ready to start using social media smarter, not harder? Schedule a 15-minute one-on-one coffee chat over ZOOM to talk about strategically incorporating both social media and inbound strategies into your current marketing plan.

Branded ZOOM backgrounds allow businesses to not only add another option for secondary marketing, but also confirm both identity and authority to prospects and customers. Investment starts at $85. Visit our webpage to get started.

#smallbusiness #businesstips #marketingtips #socialmedia #digitalmarketing #visiblymedia #tuesdaythoughts #socialmediamarketing

Author

  • Lisa Raymond

    Lisa Raymond is the owner and creative genius of Visibly Media. She has worked in graphic and website design since 1997 and in social media management & marketing since 2007; she has been married over 30 years, has 4 children and 4 grandbabies, and reigns as Queen of her organized realm of chaos! Lisa & Visibly Media do not use any AI in the creation of marketing strategies, posts, or graphics.