At some point we've got to ask why modern human-centric tools are being "dumbed down"

circuitbored
Site Admin
Posts: 102
Joined: Fri Aug 18, 2017 9:03 pm

At some point we've got to ask why modern human-centric tools are being "dumbed down"

Post by circuitbored » Wed Apr 17, 2024 10:51 pm

In many places, my spell check now works quite poorly, especially on my mobile phone. I know it's not just me experiencing this, because I've seen the very same mistakes I usually make show up in other people's social media posts and even in print articles. One example: when I try to type words with apostrophes like "It's", the apostrophe now often gets combined with the letter "c" on my phone keyboard, so "It's" comes out wrong as "Itcs", which is rather maddening, as for many years prior it was never an issue. Many other simple words also no longer get highlighted on Android when I mistype them on my phone, and since posts can't always be edited across social media, it becomes quite disruptive and embarrassing to go back, delete posts, and retype them from scratch, especially on apps like Twitter (X, ugh) where editing isn't an option.

I can recall a time when it was essential, if not totally intuitive, that spell check would highlight misspelled words so you could quickly proofread and fix them (and in some cases your grammar) before publishing a post. Heck, highlighting is happening on many words I've mistyped while writing this rant on desktop, so there's that... There is no way I'd attempt to write posts here on anything other than a desktop computer, not just because mobile and tablet screens are far too small, but BECAUSE I've found spell check on Android far too embarrassingly faulty for at least the past two years to get serious writing work done on that OS.
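The highlighting behavior described above is, at its core, not complicated to provide: compare each typed word against a dictionary and flag the misses. A minimal sketch (the tiny word list and function name here are mine, purely for illustration; real checkers use large dictionaries plus affix rules and edit-distance suggestions):

```python
# Toy spell check: flag any word not found in the known-word set.
# KNOWN_WORDS is a stand-in for a real dictionary file.
KNOWN_WORDS = {"it's", "the", "quick", "brown", "fox"}

def flag_misspellings(text: str) -> list[str]:
    """Return the words in `text` that are not in the dictionary."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return [w for w in words if w and w not in KNOWN_WORDS]

print(flag_misspellings("The quikc brown fox"))  # prints ['quikc']
```

The point of the sketch is that the basic flag-and-highlight loop is decades-old, cheap technology, which is exactly why its recent degradation is so hard to excuse.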

As I replied to an email today, I had a revelation: with the proliferation of new AI tools (many built by the same companies that manage the productivity and social tools we use), it would be all too easy for these companies to lose any incentive to keep developing human-centric tools in favor of AI-based ones, as poor human work only serves to make AI look better. In many ways now, we lose sight of how tiny strategic moves by the tech industry can have far-reaching and serious impacts, and can greatly influence the outcome of our future, even toward a perilous one.

Now more than ever, we need to be vigilant and vocal when we observe flaws in human-focused productivity tools, especially serious flaws in functionality, efficiency, and accuracy. We need to ensure that we, as humans (creatively and analytically), are not being handicapped in even the most subtle ways... As computing power increases, testing methods are well established, and automated (and even human) development methods take leaps forward, there is less and less excuse for flawed functions and features in mission-critical hardware and software. I have written here before about how many companies up-sell software in tiers by limiting the functionality of each edition, but this write-up concerns cases where software might be intentionally and covertly serving a conflict of interest in a strategic business capacity: something completely different from, and potentially far more toxic than, a forced update... Many human employees can lose their jobs as a result of an artificially skewed playing field, and the results can be devastating in the long term if we are not aware and vigilant of this type of conflict.

Just for example, consider calculator software... We have had calculators for many years, hardware and software alike, and they are inexpensive now due to market saturation, of course... If the companies that develop calculators wanted to renew their revenue streams and raise prices, one could say a deeply "calculating" strategy would be to develop an automated accountant service... If every dominant calculator maker in the industry began developing subscription-based AI calculating/budgeting tools, and, to ensure adoption of the new monthly "AI accountant" service, secretly began introducing flaws into their traditional, human-operated tools (or simply stopped improving those tools altogether), suddenly the shiny new AI service would appear far more accurate, possibly faster, and more essential than human laborers, as humans lose the ability to trust their prior tools based on flawed output or functionality...

I have even seen certain hardware tools add to the malfunction: the latest keyboard I bought has inverted symbol keys that make me look stupid when I try to type [brackets] and get {braces} with the shift key... That may be a bit more of a conspiracy stretch, but human work can be sabotaged in other ways nonetheless. I have noticed tools I've used for many years to make music take long periods of time to load, despite now running on much faster hardware, and sometimes not loading properly at all... If I had to compete in a speed contest with an AI music generator, I might not even be able to keep up with its speed, because of the handicapped performance of tools that should be running much faster than they used to on even brand-new hardware... A lot of my software gets slower after updates, too, and few updates to a Digital Audio Workstation pertain to security, so it's quite confusing why loading doesn't get faster over time with upgrades.

Companies need to test their IT products properly and reassure us, as humans, that our ability to do our work will always be supported by their productivity tools... Companies need to commit to improving tools over time, and to announcing when those tools can no longer be properly supported. For mission-critical tools, companies should perhaps be required to transition or sell them to entities that can uphold support when they can't maintain them properly (especially when the tools are uniquely essential to individuals and businesses).

For a non-technical example: if only one company had made all the world's shovels (that humans could use to dig) for decades, but then decided to make only expensive tractors that could dig holes, and completely stopped making shovels for humans in favor of selling far more expensive tractors, people would call it unfair for sure (if they have brains). The tractor company would perhaps see it as their right, but their actions would result in many being laid off and a quite unstable hole-digging industry... Now imagine that scenario placed upon the entire computer software and hardware industry, and it becomes far more serious...

The real conflict is mainly that many companies in the business of supporting humans face a conflict of interest when launching their own AI products: their supported clients are potentially turned into competitors by that move (because those clients more often than not leverage human labor), and at the same time into prospective customers for the new AI service, who are then marketed a tool they cannot refuse, on the grounds that their employees cannot work properly on unknowingly flawed tools.

AI tools, as they stand today, still require a lot of human intervention and oversight; there is not a vast number of tools in service that run completely autonomously (especially mission-critical ones). When vetting a supply chain for your business needs, it's extremely important to vet suppliers from all angles (not just technical proficiency) to make sure the very companies and individuals you rely on aren't also setting you up to be locked into influence by, and dependency upon, them. As we begin to adopt AI, we can never underestimate the oversight and accountability involved, and in my opinion, at this point, proper, ethical, and accountable human oversight is still mission-critical to business in almost every aspect of technology.

We've got to hold companies accountable for failing to improve human tools properly. These are the very tools we will need to audit AI as it becomes more prevalent. My father always used to say: those who forget the past are condemned to repeat it... There are many real cases in our history where hubris and overconfidence in technology have led to quite catastrophic results (the Hindenburg, the Titanic, the movie Waterworld, self-driving cars on highways and in snow, that "AI badge pin" thing). Let's not set ourselves up for any new cases. Stop dumbing down our human-centered (non-AI) tools (and keep your filthy, greedy, monthly-fee-grubbin' hands off them as well).
