I'm not very invested in the future of Ai via text yet
Posted: Sat Feb 18, 2023 6:58 pm
Text allows a certain degree of fakery or deception to be sustained in online communication with strangers...
This is what drives my apprehension about, and lack of enthusiasm for, ChatGPT and other solutions like it lately... As the all too popular schemes of Crypto and NFTs subside, it seems like the same promotional tactics used to market those ideas have now transitioned into promoting Ai as a service for everything from image creation, to music generation, to writing term papers. Ai via text or chat is suddenly being peddled as a magical fix to many problems, even though Google Voice in my car still cannot tell the difference between my friend named Stanley and the famous comic book scribe Stan Lee. In just a few months, ChatGPT went from being a free beta test app to being integrated into Microsoft's Bing search service... Microsoft is also discovering the issues that come with quickly launching untested "Ai" tools: the Bing integration isn't always accurate, and its results at times show bias and even totally false responses... Accuracy of information has never NOT been a requirement in technology, but quickly launching untested solutions, often in desperation for market dominance or profit, seems to be a rising trend among previously credible companies... We may need to re-evaluate which companies we can trust when they create trust failures of this kind.
I'm not enthused about Bing integrating ChatGPT into its search service just months after the product launched... It seems as if testing in production is now standard practice, even for the major companies tasked with our technology future.
We should also recall the failures of self-driving cars, automated telephone lines at Comcast, virtual personalities like FN Meka (the imitation Ai music artist), and the fact that Roombas still can't detect dog poo on carpet (potentially smearing it everywhere in your house) as crucial signs that Ai as a service is not quite ready for launch in mission-critical (serious) settings. In an era when we are having serious crises of confidence in companies and government... Wells Fargo constantly fleecing banking customers, the train derailment and environmental devastation in Ohio, George Santos completely lying about his qualifications and still keeping the job... it's very important to recognize just how much worse things can get for all of us when scammers are given more tools to work with.
Whenever I hear about Ai these days I think back to the concept of the "Wizard of Oz"... where one person behind a mechanized (or scripted) solution can appear larger and more powerful than they are, or where fear, control, and truth can be easily engineered behind a veil... In today's world, this concept can be applied to many scenarios and leveraged by anyone, from individuals, to companies, to QAnon, to influence the world or push an agenda. Attacks can also be engineered to work through user accounts on platforms, making the threat about more than just who controls the platforms themselves.
Text communication very much facilitates the potential for fakery. Ai needs reference points and learning resources to develop its perspectives, and if those sources of information are tainted or improperly influenced, the output of the Ai reflects that too. Just as racism or misunderstanding can be handed down through the generations of a family, Ai derived from other Ai will also create bad products that may persist, undetected, well into the future... Flaws and data corruption in Ai are also harder to detect when text is its output.
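To make that "tainted sources in, tainted output out" point concrete, here's a toy sketch in Python. To be clear, this is purely illustrative... it is not how ChatGPT or any real product works, and the corpus and output shown are made up for the example... but it demonstrates the basic mechanic: a text generator can only recombine whatever it was trained on.

```python
import random
from collections import defaultdict

# Toy illustration only: a tiny bigram "language model".
# Real systems are vastly more complex, but the principle is
# the same: the output can only reflect the training text.

# A deliberately skewed training corpus (hypothetical example data).
corpus = (
    "the new coin is a guaranteed investment . "
    "the new coin is a totally safe investment . "
    "the new coin is going to the moon . "
) * 10  # repetition stands in for an internet flooded with one narrative

# "Train": record which word follows which in the corpus.
model = defaultdict(list)
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    model[prev].append(nxt)

def generate(seed, length=10):
    """Sample a continuation by following observed word transitions."""
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# The model can only parrot the tainted source material:
print(generate("the"))
# e.g. "the new coin is a guaranteed investment . the new coin"
```

And if this toy model's own output were fed back in as new "training data," the skew would only get more entrenched... which is exactly the "Ai derived from Ai" problem described above.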
If you can recall ages ago when we had IRC and bulletin boards, the textual nature of communication allowed admins to script a lot. Catfishing was greatly facilitated by users being able to fake their gender, their wealth, and pretty much every representation they made online... Text communication in 2023 is a backwards regression. As we began using more images on the Internet, reverse image search became a tool we could use to spot many online scams and frauds, but somehow, in 2023, we suddenly want to go backwards to texting?
C'mon folks... let's be real here... This narrative mostly helps people who primarily want to deceive others online, and it will create an environment with far fewer ways of determining what is real and what is fake. It's a grim future when our mobile devices force us to type all of our communication to faceless chatbots on tiny keyboards... Moving in this direction is not technological progress... at all. Also, some key directives for transparency concerning Ai need to be in place now, before it's foisted on us further by these opportunistic companies. It's already been proven that companies cannot be trusted to operate ethically with our private information. Ai piloted by profit-seeking companies will only serve to weaponize our private data against us if it remains unregulated.
Using Ai via text (especially for vital communication) will blur the line between real and scripted personalities. In so many ways, it's a step backwards for our technological future.
The companies and people advocating for Ai via text are pushing us all towards a new era of deception and scams. I'd highly recommend avoiding this "Ai via text" trend; it's not the path to a trustworthy future of communication.