Mr Mobile review of a product I had my eye on for meeting notes, the Plaud Note:
Finally, there’s the question you really have to ask with any product like this. What company am I entrusting with these potentially sensitive recordings? Well, answering that led me down a fascinating rabbit hole into the world of so-called registered agents, which are essentially companies that allow certain types of businesses to operate by giving them a physical address for legal purposes…in Wyoming, registered agents don’t seem to need to do any kind of vetting of the companies they represent.
This was not the way I expected the review to end, but it raised an interesting point. There is no claim of anything fishy going on with Plaud, but things like this are why I really love Michael’s reviews. He thinks more deeply than most people would about a product that receives masses of our own, and other people’s, data.
Of course, he also raises this concern about the dozens of phones we do the same with, but the company in question here is a bit of an enigma. If you want to achieve the same result, the alternative is to hand this data to Google or (Not so) OpenAI; however, I think I trust them more.
The type of data I would hand over using the Plaud Note would be personally and business sensitive, and I don’t think anyone should be doing this without knowing exactly who is processing it. With an endless sea of data brokers, that might not be 100% possible, but if I don’t even know who the OEM is in the first place, it’s a hard pass from me.
Matt Birchler, writing about the technology used in wristbands at concerts:
Whenever a company says, “We’re using AI to enhance our product,” ask them for specifics. Often, it’s either complete nonsense or something so minor that it’s essentially doing nothing. It’s not always the case, but I think you’d be surprised how much “AI” is mentioned in product marketing as nothing more than a marketing tool to look modern.
The piece linked by Matt is incredibly interesting, but this is the part that resonated with me and confirmed what I’ve suspected for a while. Now that everything is marketed as having AI, I am more convinced than ever that hardly anything actually utilises any type of AI. I’ve seen companies make claims about their products that amount to nothing more than a bunch of IF statements in their code.
I mean, sure, that’s essentially what generative AI is if you zoom out enough and reduce it to its simplest structure, but a significant portion of these products marketed as using AI absolutely do not use AI in the generally accepted sense.
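To make the point concrete, here’s a hypothetical sketch (not based on any specific product) of what an “AI-powered” feature can look like under the hood — a handful of hard-coded thresholds dressed up in marketing language:

```python
def detect_mood(heart_rate_bpm: int) -> str:
    """'AI-powered mood detection' that is really just threshold rules."""
    if heart_rate_bpm > 120:
        return "excited"
    if heart_rate_bpm > 90:
        return "engaged"
    if heart_rate_bpm > 60:
        return "calm"
    return "resting"

# No model, no training data, no inference - just IF statements.
print(detect_mood(130))  # excited
print(detect_mood(75))   # calm
```

There is no learning, no model, and no data involved here, yet a feature like this could plausibly be marketed as “AI” without anyone being able to easily disprove the claim.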
There’s also a deeper, more fascinating shift in our collective psychology occurring. The assumption that even fairly simple gadgets are advanced electronics is a downstream result of all the marketing hype. I am often amazed by what people attribute to the extraordinary when it could be explained quite simply. The wizardry and witchcraft that once filled the gaps in our understanding are now labeled as AI, when in reality, hardly any intelligence is involved at all.
Rex Barrett writing about his ongoing content diet:
… Filling my time with these junk apps is alluring, and I feel good when using them, but I want to find content that takes me somewhere. Ultimately, I don’t want to look back and see hours of time squandered on things I’ll not even remember in a day or two.
My brain goes through these cycles of needing to back away from the web completely, then diving in constantly. It often coincides with bouts of low mental health and other issues, and much like binge eating, it is a coping mechanism to distract me from other things.
In many ways I know that snacking on the internet is bad for me, but at the same time I enjoy using it. Over time I have built up an intolerance to shameless showboating, attention seeking and needless hyperbole, which is why I tend to steer clear of Facebook, TikTok, Instagram and similar platforms in favour of decentralised mediums (namely Mastodon). With non-algorithmic timelines these things are kept to a minimum, and as such are much better for me.
Cory Dransfeldt has great posts about AI, and this one is no exception. Of course, he’s right in his stance on AI-generated images, but as with everything, I don’t find the conclusion so simple.
The images it generates are, at best, a polished regression to the mean. If you want custom art, pay an artist.
As I have covered before, my thoughts on generative AI are mixed. While I understand many of the issues people have with it, I can’t muster the same level of concern. I use it many times a day to help with tasks that would have taken me much longer; it helps me out with code issues I have, and I also use it to generate images for my blog posts. While I don’t do this a lot, these generated images are a replacement for using stock images. Photos from places like Unsplash are not uncommon to see on the web, pushed by platforms like Ghost and WordPress, and these do not generate any income for artists either.
I would love to have the money to pay an artist to produce images for my blog, but this is not viable, which I am sure is the case for many people. Generative AI just helps me out now and again to make my posts a little more appealing. That is not to say I don’t take on board the ethical issues; my use of these images has plummeted in the last few months because of them, and in many instances where I would have used an image in the past, I now just publish the post with text.
There should be a safe middle ground that addresses these ethical conundrums, as there should be for stopping AI from hoovering up your data - but I just don’t see one happening.
One of the first sections I added to my new blog is a reading page. I adore reading, and if I’m not reading, I am often pondering the things I have read. It’s an obsession, but one I happily embrace. The only problem with my need to track these activities is the standard by which I consider something read.
There’s been debate online about the distinction between reading a book and ‘reading’ an audiobook. I don’t wish to ignite that discussion now, so I’ll steer clear of it due to my aversion to audiobooks. Despite trying them several times and spending considerable money on them (why are they so expensive?), my brain just doesn’t absorb the information as effectively as it does with reading.
My dilemma isn’t with that particular hot topic; it’s more about the quantity of the book consumed. In recent years, I’ve persevered through books I’d rather not have wasted time on (looking at you, Feel Good Productivity) but did so to finish them. Not because of the vain metrics I set for my reading tally, but simply because I feel I need to. Did I really read a book if I only got halfway through it?
If I did read it, is there a threshold for progress I need to reach? I certainly grasped the point of some books long before the halfway mark. My Kindle history is littered with lengthy books that could have been blog posts, and I’m starting to ponder the wasted time. When you’re 40, life definitely is too short for bad books. So, perhaps I should start abandoning them earlier when I’m confident I’ve understood the gist.
This raises the question: Did I read a book if I can summarise it? If I skipped the book entirely and opted for CliffsNotes, does that count as reading? Following my rationale above, it could be the case. I’m not suddenly going to hack my reading and get AI to summarise books for me - but I might consider it for some dull ones.
If the end result is the same, there’s no argument, barring the very real benefits of actually reading the book. Reading a book is quite different from knowing what the book is about. There’s something wonderful about understanding the author and the origin of their words. Experiencing the journey in a well-paced process, rather than being bombarded with a brief summary.
However, this only really applies to good books. Enduring bad ones rarely benefits me, except for the occasional headache, so the cycle continues. Other than realising that I should abandon some books sooner, I haven’t really reached a conclusion in this post. Much like the bad books I’m discussing.