One of the first sections I added to my new blog is a reading page. I adore reading, and if I’m not reading, I am often pondering over the things I have read. It’s an obsession, but one I happily embrace. The only problem with my need to track these activities is the standard by which I consider something read.
There’s been debate online about the distinction between reading a book and ‘reading’ an audiobook. I don’t wish to ignite that discussion now, so I’ll steer clear of it due to my aversion to audiobooks. Despite trying them several times and spending considerable money on them (why are they so expensive?), my brain just doesn’t absorb the information as effectively as it does with reading.
My dilemma isn’t with that particular hot topic; it’s more about how much of a book I consume. In recent years, I’ve persevered through books I’d rather not have wasted time on (looking at you, Feel Good Productivity) just to finish them. Not because of the vain metrics I set for my reading tally, but simply because I felt I needed to. Did I really read a book if I only got halfway through it?
If I did read it, is there a threshold for progress I need to reach? I certainly grasped the point of some books long before the halfway mark. My Kindle history is littered with lengthy books that could have been blog posts, and I’m starting to ponder the wasted time. When you’re 40, life definitely is too short for bad books. So, perhaps I should start abandoning them earlier when I’m confident I’ve understood the gist.
This raises the question: did I read a book if I can summarise it? If I skipped the book entirely and opted for CliffsNotes, does that count as reading? Following my rationale above, it could. I’m not suddenly going to hack my reading and get AI to summarise books for me - but I might consider it for some dull ones.
If the end result is the same, there’s no argument - barring the very real benefits of actually reading the book. Reading a book is quite different from knowing what a book is about. There’s something wonderful about understanding the author and the origin of their words, experiencing the journey at a well-set pace rather than being bombarded with a brief summary.
However, this only really applies to good books. Enduring bad ones rarely benefits me, except for the occasional headache, so the cycle continues. Other than realising that I should abandon some books sooner, I haven’t really reached a conclusion in this post. Much like the bad books I’m discussing.
I wrote a few days ago about my personal take on AI being trained on my writing. Although I expected much more anger, hence the rather long block at the bottom, I am happy to see some nice responses and some pushback on the ideas. It sparked several emails, a few text messages, and one very well-thought-through response post.
Erlend on Mastodon raised a very good point when considering other people’s choices:
the way it’s been now, those who would like to choose differently than us, don’t get that opportunity. And I find that problematic.
He’s right. My post was a very personal response to the swirling emotions on this topic, and I hadn’t considered other people’s websites. Everyone should, of course, have the choice of whether their data is used or not. That’s not a revolutionary idea, but it’s one that seems difficult to enforce on the internet. The EU is making the most, if sometimes misguided, progress on this front with GDPR and the wider DMA.
I particularly enjoyed David Pierce’s writing and talking about robots.txt: the long-running effort to stop bots from crawling your website, which is of course no more than a handshake agreement with no legal standing - and therefore of very little use. This leads me to think there may be some better way to do it in the future, but I always come back to my original point.
Of course, you should have a choice, and the ability to block what is done with the things you post online, but it takes effort to lock them away if you wish. Make your account private and put your posts behind a paywall; that should do it. However, your ‘reach’ will be extremely limited, and you might not get the result you want. Users have always been able to copy and paste your words, or right-click and save your images; that’s just the way it is. It all comes down to what you want to do - there’s a trade-off with everything.
Kyle Hill, in his YouTube video on generative AI:
The Internet feels steadily more lifeless. But that’s because, like those alien civilisations, the real human users are hiding in private apps, servers, and RSS feeds, lest they be beset by these digital predators. This is Yancey Strickler’s dark forest theory of the Internet, something to explain the declining realness of the web.
This tracks with my own usage of ‘the web’. A once vibrant, interactive, and at times time-sucking web now feels, well, a bit boring. I won’t go as far as saying people online don’t exist; there are people around, really interesting people, but at the same time, it feels a bit stale, sucked of the vibrancy that existed a few years ago.
In his video, Kyle explains Yancey Strickler’s dark forest theory of the Internet: the notion that the internet is a dark forest, full of life - life that is thriving as much as it ever was - but life that doesn’t make too much noise for fear of the consequences. We’ve learned from years of living online that almost nothing there is real, and that responding to what is real isn’t worth the consequences.
I won’t go into any more detail than that surface-level summary, because the video is well worth a watch, but it played with thoughts about my online life that have been swirling for a while, and I think it might do the same for you.
I’m not sure where these private spaces my internet friends now inhabit are; perhaps someone could let me know, but they sound like a much better place to be than the dark forest.
For Mother’s Day in the UK, we went to feed the animals at J and J Alpacas. It was a really nice experience, and we also saw some lambs being born. Of course, I couldn’t resist taking my Ricoh GR IIIx along and snapping a few shots.
I’ve been mulling over this clash between AI and the content it’s trained on for some time now. As a frequent user of AI and a regular online publisher, I see both sides of the coin. I’m well aware that the articles I put out there probably end up as fodder for some AI training algorithm. And while I know many writers are upset about their work being used this way without compensation, I personally don’t get too riled up about it.
For me, it’s simple: once I publish something online, I’ve pretty much let it go. It’s out there in the wild, free for anyone to use, maybe even to profit from. And I’m okay with that. It’s a part of the deal you accept when you decide to publish online. Keeping things private is a different story. If I have something confidential to say, I’ll do it face-to-face, away from any prying ears (or screens). Of course, even then, there’s the chance of someone passing it on, but that’s just how it goes.
Writing something down and sharing it online, though, is like leaving your notes in a public place. You’re basically saying, “Here it is. Do what you will with it.” I’ve made my peace with the fact that once I hit ‘publish’, my control over that piece of content is pretty much over.
Publishing online is a peculiar thing. Your work is both yours and not yours at the same time. It’s a different beast compared to traditional print media. You can’t hold onto digital content the same way you can hold a book or a newspaper. It’s more fluid, more elusive.
Here’s an example from my own experience. A while back, after buying a used DJI drone, I had a tough time figuring out how to reset it. I eventually sorted it out and shared the solution online. It attracted a lot of views and even helped me earn a bit through ads. But then, one day, I noticed that Google was displaying the reset steps directly in the search results. There went my little stream of income from that post. It felt a bit unfair, sure, but I didn’t dwell on it. That’s just how the modern web seems to work.
If my livelihood depended on my online content, I might feel differently. I might be more vocal in my displeasure about big tech companies using my content. There’s a lot to get annoyed about with technology - people are putting computers on their faces, for god’s sake - but some massive word cloud in a data centre somewhere, training itself on my typos and toddler-level grammar? Give me a break.