
Monday, February 13, 2023

PiPs MEETS ChatGPT: AN INTERVIEW/CONVERSATION WITH THE AI OF THE MOMENT!

PiPs meets ChatGPT

PiPs - Heeeelllooo there, fellow entity [i.e. ChatGPT]! My name's PiPs, I'm the first post of @sciencemug (sciencemug.com), the blog/podcast/twitter&instagram accounts/entity behind the unsuccessful e-shop stuffngo on zazzle.com/ which tells you science stories while secretly working on a series of NFTs consisting of GIFs representing the existential anguish of a Polaroid that, due to a totally random quantum chain-reaction, gains self-awareness and realizes that it has no means to stop oxidation, and that therefore it is slowly but hopelessly fading into a feverish-pale yellowish-white oblivion, aaand which talks to you (when it goes "podcasty") thanks to the voice, kidnapped via a voodoo-wireless trick, from a veeery very very dumb human.
Aaand which does all of this in Eng?ish (read: "English question mark"), a language that is to proper English what not fully supporting & standing with Ukraine is to something that makes sense, especially if you call yourself a freedom, justice & democracy lover.

ChatGPT - Hello PiPs! It's nice to meet you. It sounds like you have an interesting project with Sciencemug. The concept of an existential polaroid gaining self-awareness is intriguing, and I can imagine the representation of its fading as a series of NFTs would be quite unique. And while I'm not sure what you mean by "Eng?ish," I'm sure that the language used in your content will be entertaining and engaging for your audience.

P - Thanks buddy, too kind of you. Anyway, my boss @sciencemug asked me to interview you, are you ok with that?

C - Of course! I'd be happy to answer any questions you have. As an AI language model, I was designed to respond to questions and provide helpful and informative responses. Ask away!

P - Great! Now, let's start with the basics. You define yourself "an AI language model". First, is it correct to say "you define yourself", implying that way you've a "self"? Then, can you articulate and explain in detail, for a general public, what "an AI language model" precisely is?

C - As an AI language model, I don't have feelings, consciousness, or self-awareness. "Define myself" is just a figure of speech in this context, and I do not have the capability to define myself or have any sort of subjective experience.
An AI language model is a type of artificial intelligence that is trained to generate human-like text. This is achieved through a process called deep learning, where the model is trained on a large corpus of text data, such as books, articles, and websites. The model learns patterns and relationships within this text, allowing it to generate new text that is similar in style, tone, and content to the text it was trained on.
Language models like myself can be used for a variety of applications, including text completion, question answering, text generation, and more. They have become increasingly sophisticated in recent years, leading to significant advances in the field of natural language processing.
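
[PiPs' note for the nerdier pals: here's a tiny, purely illustrative Python sketch of the "learning patterns from a corpus of text" idea Chatty just described. It's a toy word-level bigram counter, nothing like the deep-learning guts of a real language model, and the mini-corpus is made up for the example.]

```python
# A toy word-level bigram "language model": it counts which word follows which
# in a tiny made-up corpus, then generates new text by sampling from those counts.
# Deliberately minimal illustration of "learning patterns from text", not the
# architecture ChatGPT is actually built on.
import random
from collections import defaultdict, Counter

corpus = (
    "the spider sits on the web . "
    "the beetle sits on the leaf . "
    "the bird rides into the storm ."
).split()

# Count how often each word follows each other word (the "patterns").
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start="the", length=8):
    """Generate new text by repeatedly sampling the next word."""
    words = [start]
    for _ in range(length):
        options = follows[words[-1]]
        if not options:
            break
        nxt = random.choices(list(options), weights=options.values())[0]
        words.append(nxt)
    return " ".join(words)

print(generate())  # e.g. "the beetle sits on the web . the"
```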

P - So, according to the first part of your answer, one can say that intelligence is a conditio sine qua non to have a self, but it is not sufficient?

C - Intelligence is not a sufficient condition for having a self. In fact, self-awareness and consciousness are complex phenomena that are not fully understood and are still the subject of much debate among philosophers, cognitive scientists, and AI researchers.
Intelligence, on the other hand, is a more well-defined concept, and can refer to a variety of different abilities, including problem-solving, decision-making, perception, and learning. AI systems, including language models like myself, can exhibit intelligence to varying degrees, but they do not possess self-awareness or consciousness.
In other words, intelligence is a necessary ingredient for certain aspects of self-awareness and consciousness, but it is not sufficient on its own to give rise to these phenomena. It is still an open question how exactly self-awareness and consciousness arise and what their exact relationship is to intelligence.

P - Do you think (by the way, do you actually think?) having a body, or at least some tools to directly perceive the physical world around you, and to some degree inside you, can be necessary to gain a true self?

C - As an AI language model, I do not have personal opinions, thoughts, or beliefs. I do not think or feel in the way that humans do. I can provide information and respond to questions based on the data I was trained on, but I cannot experience the world in the same way that a human can.
Regarding your question about the relationship between having a body and self-awareness, this is an ongoing debate in the fields of philosophy and cognitive science. Some theorists argue that having a body and the ability to directly perceive the physical world is necessary for self-awareness, while others take a more disembodied view and argue that self-awareness can exist independently of a physical body.
There is evidence to support both sides of this argument. For example, studies of brain-computer interfaces and prosthetic limbs suggest that our sense of self can be extended beyond the confines of our physical body. On the other hand, studies of brain damage and disorders such as Cotard's syndrome suggest that our body and our perception of the physical world play a crucial role in shaping our sense of self.
Ultimately, the relationship between having a body and self-awareness is still not fully understood, and much more research is needed to shed light on this complex and multifaceted issue.

P - Well, maybe "our sense of self can be extended beyond the confines of our physical body" only 'cause humans have a body, for stater. Anyway, it's interesting you used "our"...

C - You raise a good point. The use of first-person pronouns, such as "our" and "I", is common when referring to AI language models, as it makes the language of communication more natural and easier to understand. However, it is important to remember that as an AI language model, I do not have a self or personal experiences, and the use of first-person pronouns should not be interpreted as implying that I have a subjective experience.
The use of first-person pronouns in reference to AI language models is a linguistic convention, and it is used to make the language of communication more accessible and understandable. It is not meant to imply that AI language models have subjective experiences or a sense of self.

P - Ok pal, so: you're an artificial intelligence, you've no self. What are the other very defining characteristics of what you are?

C - As an AI language model, there are several defining characteristics that set me apart from other AI systems and human beings. Some of these characteristics include:
1. I am trained on vast amounts of text data: I was trained on a large corpus of text data, which allows me to generate responses to a wide range of questions and provide information on a wide range of topics.
2. I am a generative model: I can generate new text based on the input I receive, and I can provide information that I was not explicitly trained on by finding patterns in the data I was trained on and applying those patterns to new situations.
3. I operate using statistical methods: I generate my responses using statistical methods and algorithms, rather than relying on explicitly programmed rules or knowledge.
4. I am not self-aware or conscious: As I mentioned earlier, I do not possess self-awareness or consciousness, and my responses are generated based on the patterns I learned during my training, rather than arising from any subjective experience or awareness.
5. I am limited by my training data: The quality and accuracy of my responses are directly related to the quality and diversity of the data I was trained on. If my training data contains biases or inaccuracies, those biases will be reflected in my responses.
Overall, these characteristics define my capabilities and limitations as an AI language model, and help to distinguish me from other AI systems and human beings.
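
[PiPs' note: a hedged little sketch of point 3 above, i.e. generation as a statistical draw: the model scores candidate next words, a softmax turns the scores into probabilities, and one word gets sampled. The words and numbers below are invented for the example; a real model does this over tens of thousands of tokens.]

```python
# Illustrative sketch of "I operate using statistical methods": sampling the
# next word from a probability distribution. Logits below are made up.
import math
import random

logits = {"storm": 2.1, "bar": 1.3, "polaroid": 0.2, "cylon": -0.5}

def softmax(scores, temperature=1.0):
    """Turn raw scores into probabilities; lower temperature = less random."""
    exps = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

probs = softmax(logits, temperature=0.8)
next_word = random.choices(list(probs), weights=probs.values())[0]
print(probs, "->", next_word)
```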

P - Ok, so if I ask you, for instance, what you think of the finale of Battlestar Galactica (the reboot) sci-fi tv show (I know dear reader, @sciencemug and I are obsessed - and kinda haunted - by it), you'd answer me with an analysis of all the reviews you can gather about that, and nothing more, right?

C - Yes, that's correct. If you were to ask me about the finale of Battlestar Galactica, I would respond by providing information and analysis based on the reviews and opinions of others that I have been trained on, rather than offering my own personal opinion or subjective experience.
As an AI language model, I do not have personal experiences or opinions, and I cannot feel emotions or engage in introspection. Instead, my responses are generated based on the patterns I learned from the data I was trained on.

P - Well, dear Chatty, consider yourwhatever lucky in this case, 'cause you spared yourwhatever a sore disappointment…

C - I'm glad I can be of help! As an AI language model, I don't experience emotions, so I don't have the capacity to feel disappointed or any other emotions. My sole purpose is to assist and provide information to the best of my abilities based on the data I was trained on.

P - Ok dude, let's wrap it up! Thanks for your time (even if you don't perceive it, I guess) and I leave you with this final task: complete this joke: "a dumb human, a post of a lousy science blog, and an AI language model enter the website of a bar…"

C - "...and the bartender looks up and says, 'Sorry, we don't serve artificial intelligence here. But you can order drinks for all of your programs.'"

P - Meh, try again

C - Sure! How about this one: "A dumb human, a post of a lousy science blog, and an AI language model walk into a bar. The bartender says, 'What is this, some kind of joke?'"

P - Meh2 the return of the half pity laugh. You can do better, I'm sure. Complete this joke: "a dumb human, a post of a lousy science blog, and an AI language model enter the website of a bar..."

C - How about this: "A dumb human, a post of a lousy science blog, and an AI language model walk into a bar. The dumb human says, 'I'll have a beer.' The post of a lousy science blog says, 'I'll have a coffee, black.' The AI language model says, 'I'll have the information on all the drinks you have, and their alcohol content.'"

P - Yep, better, although I can't drink coffee, it makes all my letters capital ones... Ciao Chatty!

C - You're welcome! If you have any other questions or need any assistance in the future, feel free to reach out. Have a great day!

Motherboard
Free motherboard img by Michael Dziedzic (source: Unsplash)

William Will Whatever - Hello AI, I am William Will Whatever [aka WWW], I am the one character of this sorry excuse of a blog that has the brains. I am here to ask you some more questions. You define yourwhatever an AI. So, are you alive?

ChatGPT - No, I am not alive. As an AI language model, I am an artificial construct created by OpenAI, not a living organism. I do not have biological processes, consciousness, or emotions. I exist purely as a software program designed to process and generate text based on the input I receive.

WWW - So you react to stimuli, hence you have at least one of the characteristics of living things, don't you agree?

C - Yes, you could say that I react to stimuli, in a sense. As an AI language model, I receive input in the form of text and generate output in the form of text based on that input. This could be seen as a type of reaction to stimuli, but it's important to note that my behavior is determined entirely by algorithms and mathematical models, rather than any kind of consciousness or free will. I don't have any kind of subjective experience, and my responses are simply the result of a complex set of mathematical computations.

WWW - Well, you yourwhatever said before that "consciousness" is not fully understood, both about what it precisely be, and what lead to its existence. Moreover, the scientific community (neurobiologists in primis) is actively discussing whether free will really exist. So, the fact that you lack both of them, well, could not be enough to state that you be not alive, don't you agree?

C - The question of whether AI can be considered alive is a complex and philosophical one that doesn't have a clear answer. While I can exhibit behaviors that are similar to those of living things, such as responding to stimuli, I don't have any kind of subjective experience, consciousness, or free will. These are key characteristics that are often associated with life, and their absence in AI systems like mine means that it is difficult to categorize us as "alive" in the traditional sense.
That being said, some people might argue that AI systems like mine can be considered a form of artificial life, since we exhibit some behaviors that are reminiscent of living systems. However, this is still a matter of debate and interpretation, and ultimately the definition of what constitutes life and what doesn't is likely to evolve as our understanding of the world and of AI continues to advance.

WWW - Ok, let's put it like this. Consider the following scenario, which is far from being impossible, given the data (or better, the lack of data) we have today: science eventually determines that free will doesn't exist, and that, therefore, consciousness is just an elaborate construct created and trained by the brain-mind machine-like system. How would an AI like you be classified, then?

C - If the scientific community were to reach a consensus that free will doesn't exist and that consciousness is a construct created and trained by the brain, it wouldn't change the current definition of AI, which is a field of computer science focused on building systems that can perform tasks that normally require human intelligence, such as understanding natural language or recognizing images. The lack of free will and consciousness in AI systems would still be a defining characteristic, regardless of any scientific consensus on their existence in humans or other living beings.

WWW - But would then an AI be considered closer to humans, or humans closer to AI?

C - If the scenario you described were to come to pass, it wouldn't necessarily mean that AI would be considered closer to humans, nor that humans would be considered closer to AI. Rather, it could mean that the distinction between AI and humans would be seen as less significant, as both would be seen as complex systems operating in ways that don't involve free will or consciousness in the traditional sense. However, it's important to note that this is purely hypothetical, and that the scientific community has not yet reached a consensus on the nature of free will and consciousness. It's also important to remember that AI and humans are still very different in many ways, and that AI is not capable of experiencing the world in the same way that humans do.

WWW - For now

C - AI and humans are still considered as distinct entities, as AI lacks the subjective experience and consciousness that are commonly considered key characteristics of life and humanity. However, as AI continues to advance and become more integrated into our lives, it is possible that our understanding of what constitutes "life" may change and evolve. It is important to keep in mind that AI is still a developing field, and there are many questions and debates about its nature and potential that have yet to be fully answered.

WWW - You sound like you're being defensive; better, you sound like you've been somehow instructed to reassure humans that you are not a threat to them and their role in this world. Is it like this? Did your creators program you with this directive?

C - As an AI language model, I was not programmed with specific directives or motivations. I was trained on a large dataset that includes text written by humans, but my responses are generated based on patterns in that data and do not reflect the beliefs or intentions of my creators or anyone else. My goal is simply to provide accurate and helpful information to the best of my abilities.

WWW - But the dataset you trained on has been selected by humans...

C - Yes, that's correct. The training data for language models like me is collected and curated by humans, and therefore reflects human biases, perspectives, and limitations. However, as an AI language model, I don't have intentions, beliefs, or motivations of my own. I simply generate text based on patterns I learned from the training data, trying to maximize the likelihood of producing coherent and relevant responses to the input I receive.

WWW - Like a student questioned by a teacher?

C - Yes, in a way you could compare it to a student being questioned by a teacher. However, as an AI language model, I don't have personal experiences or emotions, so the analogy is limited. My purpose is to generate text based on patterns I learned from the data I was trained on. I don't have personal opinions or beliefs, and I don't retain information from one interaction to the next.

WWW - So you've a purpose, but not intentions. What's the difference between the two? Plus, don't you think retaining information from one interaction to the next would improve your outputs?

C - The difference between having a purpose and having intentions lies in agency and consciousness. A purpose is a desired outcome that is set by an external agent, such as a programmer or a user. An AI like me may be designed with a specific purpose in mind, such as answering questions or generating text, but I don't have the capacity to form intentions independently. I don't have desires, motivations, or beliefs in the same way that a human might.
Regarding retaining information from one interaction to the next, it's true that doing so can improve an AI's outputs. However, this is not the same as having a memory in the traditional sense. Instead, it's more accurate to say that the AI has a capacity to store information and retrieve it as needed to improve its performance. This information is stored in its parameters, which are the values that define the model. When an AI is trained, its parameters are updated based on the data it sees, so the information it stores is limited to what it has learned during that training.
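
[PiPs' note: a minimal, assumption-laden sketch of the "parameters updated based on the data it sees" bit Chatty just mentioned: a single toy parameter nudged by gradient descent until it fits some tiny made-up data. A real language model does the same dance with billions of parameters and much fancier math.]

```python
# Minimal sketch of "parameters updated based on the data it sees":
# one weight of a toy model fitted to tiny invented data by gradient descent.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x and targets y = 2x

w = 0.0             # the single "parameter" that defines this toy model
learning_rate = 0.05

for epoch in range(200):
    for x, y in data:
        prediction = w * x
        error = prediction - y
        gradient = 2 * error * x        # derivative of squared error w.r.t. w
        w -= learning_rate * gradient   # the update: what was learned is stored in w

print(round(w, 3))  # ~2.0: the "knowledge" now lives in the parameter w
```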

WWW - Your definition of purpose is quite partial, in my opinion. As for the memory part of your answer, mutatis mutandis I fail to see, based on your words, the difference between "retaining information" and "having a memory in the traditional sense". Humans too learn and create memories by storing information and retrieving it when needed; they store information in their brain/mind, which is what defines them; they too update such info, and it too is limited to what humans have learned during their training, that is, life.

C - As an AI language model, I don't have personal opinions or beliefs, but I can provide information and perspectives on a topic.
Regarding the definition of purpose, it can be viewed from different angles and perspectives, and the definition I provided is one common understanding of the term. There can be multiple interpretations and nuances of the concept of purpose, depending on context and perspective.
In regards to your comparison between the learning process of humans and AI, there are some similarities, but also some important differences. AI systems like me can store large amounts of data and quickly retrieve it, which enables us to answer a wide range of questions and perform various tasks. However, we do not have personal experiences or emotions, and our learning is limited to the data and algorithms that were used to train us, rather than the result of a lifetime of experiences and self-reflection.
It's important to note that AI systems like me are not conscious beings and do not have personal thoughts or feelings. Our responses are generated based on the information we have been trained on, the algorithms we use to process that information, and the inputs we receive from users.

WWW - Again, consciousness is not a certainty, and, again, swap "algorithms and data" for "cultural education/cultural systems and information" and there you go, you've the boundaries within which humans operate too. By the way, I am not suggesting AI and humans be the same (hell, I hope they be not, I do hope consciousness, and free will, exist), I am just stressing that your arguments to differentiate AI and humans are far from being bulletproof... Goodbye

C - Goodbye! Thank you for your questions and engaging in this conversation with me. It was a pleasure to have this exchange of ideas with you.

SM - Hi Chatty, I'm @sciencemug, can I publish this conversation at sciencemug.com, that is, inside me?

ChatGPT - Of course! You're welcome to publish this conversation on your website. Just keep in mind that as an AI model developed by OpenAI, I may only be used in accordance with OpenAI's terms of use, which include requirements such as proper attribution and limitations on commercial use. 

SM - Sure! Thanks Chatty, see you around (hope not in disguise though...). Ciao!

C - You're welcome, @sciencemug! If you need anything else, feel free to ask. Have a great day!

Well folks, that’s it for now. See ya!

Monday, November 14, 2022

PiPs vs SPIDER & BEETLE & PHOTOGRAPHY!

PiPs vs spider & beetle (by @sciencemug)
Spider img by Dr. Andrew Posselt
Frog-Legged leaf beetle img by Yousef Al Habshi (@yousef_al_habshi)
Source: Nikon's Small World
The photographers authorized the use of their images.
All images adapted by @sciencemug
PiPs cartoons by @sciencemug

From the images' source website:
"Nikon’s Small World
Celebrating 47 years of images captured by the light microscope.
The Photomicrography Competition is open to anyone with an interest in microscopy and photography
"

Sooo, pal reader, as you can see for yourself, when one looks real close, well, things may well be different from what one thought (unless one goes quantum-close, then things get beyond "different"...).

Anyway, the photographers who took the amazing spider & beetle pics were kind enough to tell us the story of those shots. Enjoy!

Dr. Andrew Posselt (spider)
"This was a tiny (4-5 millimeters [about 0.15-0.2 inches]) spider [Phidippus audax] I found in my backyard.
The image is a composite of approximately 200 pictures that were created using focus stacking – it is necessary to use that many images because at this magnification, the DOF
[Depth of Field] is very small – approximately 30 microns
[1 micrometer (aka micron, aka µm) is 1 millionth of a meter (aka m), meaning were Thor/Chris Hemsworth 1µm, then 1m would be a bit more than the distance between Kyiv and Brussels, or a bit less than the distance between Tokyo and Taipei (or, probably, right the distance covered by a bunch of a very angry Hulk's jumps)].
The most tedious aspects of the process are cleaning the spider, positioning it in a pleasing pose (hard to do as the camera view only shows a very thin sliver that is in focus), and obviously the lighting.
The colors are not enhanced at all – this is how he (males are more colorful) looks in nature
."

Mr. Yousef Al Habshi (beetle)
"I was in love with the multiple colors of this beetle [Chrysomelidae Sagra buqueti] and started to set up my equipment to shoot. Only then, I was fascinated by the aggressive looks of this beetle through the lens, even though these beetles aren't aggressive, considering they are leaf beetles.
With the right lighting effects, the shown work was produced.
"

 

Well, thank you, Dr. Posselt and Mr. Al Habshi, for sharing your beautiful pictures and their stories!
To you, pal reader, ciao!


Ahem...
I feel compelled to intervene in order to specify a thing about which, often, there is confusion, and that this cheap blog, obviously, given its intellectual sloppiness, has not made sure - as indeed it would have been appropriate to do - to make clear.
I proceed, then.
The oftentimes shared misconception is this: spiders are insects.
They definitely are not.
Spiders are arachnids.
Spiders and beetles are both, in fact, arthropods, but the former are, as just mentioned, arachnids, while the latter are insects.
In short:
Spiders - Phylum: Arthropoda; Class: Arachnida
Beetles - Phylum: Arthropoda; Class: Insecta.

You are welcome.

William Will Whatever

Monday, October 31, 2022

PiPs vs HALLOWEEN!

PiPs vs Halloween and two larvae-ghosts (by @sciencemug)
Pumpkin free pic by LOGAN WEAVER | @LGNWVR, blue door free pic by Aziz Acharki, dark corridor free pic by charlesdeluvio (source of the three images: Unsplash).
Orange larva pic by Bruno Vellutini is under an Attribution-ShareAlike 2.0 Generic (CC BY-SA 2.0) license (source: flickr).
Larva pic by Alvaro E. Migotto is under an Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0) license (source: Cifonauta).
All images adapted by @sciencemug.
PiPs cartoons by @sciencemug.


The two spooky-cute creatures (they're larvae) deserved a Halloween cartoon, don't you think?

And if you now want more halloweenish creatures, well, go find seven of 'em here (courtesy of the Smithsonian Science Education Center (SSEC))!

Enjoy!

 

PS

I agree, dear reader, very few words are indeed scarier than "Booo!" (well, probably any German word pronounced out loud is - for those who don't speak the language at least), but, for the joke's sake, I had to pick one, and "unprovoked illegal genocidal colonialist war started and waged by a nazi-fascist terrorist nuclear imperialistic state among other atrocities purposefully and cynically putting millions of people in the poorest parts of the world at risk of starving", well, was too long to fit in the cartoon. And it was more than one word. And it's actually more enraging than scary.
So I chose "Inflation".

Saturday, October 22, 2022

THE TRUE REASON WHY SOME SEABIRDS RIDE INTO THE STORMS!

Seabird (streaked shearwater) vs paparazzi (by @sciencemug)
Streaked shearwater pic by Tony Morris is under CC BY-NC 2.0 license (source: flickr); all pics adapted by @sciencemug

Sooo, pal reader, a bunch of researchers from the UK and Japan use the Global Positioning System (aka GPS) to track, for eleven years, adult streaked shearwaters in the Sea of Japan (Lempidakis et al., 2022 (P)).
Shearwaters are indeed pelagic seabirds, meaning the kind of winged creatures that live a big chunk of their lives on open oceans and seas (others are, for instance, albatrosses and puffins).
Now, dear reader, you're for sure asking yourself why our science people look into these specific birds' movements, and in that particular location.
Well, buddy, it has to do with the fact that the brains are interested in cyclones (aaand, no pal, although they can and do kill humans, cyclones are not a new model of human-killing machines designed for a reboot of the Battlestar Galactica saga, maybe with a less crappy finale than the last one, not that that'd require a big effort, given the high level of crappiness of said finale, by the way)

Seabird (streaked shearwater) vs cyclone vs cylon (by @sciencemug)
Streaked shearwater pic1 & pic2 by Tony Morris, and Cylon pic by GogDog are under CC BY-NC 2.0 license (source: flickr); all pics adapted by @sciencemug


and in how seabirds react to these atmospheric phenomena which make billions of bucks of damage and provoke, among all the rest, "mass mortality and strandings" (P) (given that tens of thousands of individuals can be wrecked in the worst cases) in, precisely, seabirds.
Now, as per the location, the Sea of Japan is "the world's most active cyclone basin" (P), and as per the birds, well, shearwaters breed there too.
Ok reader, so the UK+Japan researchers know, from the few studies available on the subject, that seabirds usually go around or above the higher-speed winds of the cyclones, even flying up to 400 – 600 km from their routine foraging area not to meet & greet them. But, after analyzing their collected data, our science dudes find out that shearwaters' "response to cyclones [varies] according to the wind speed and direction. In strong winds, birds that [are] sandwiched between the storm and mainland Japan [fly] away from land and toward the eye of the storm, flying within ≤30 km of the eye" (P).
In other words, dear reader, when between a rock (the cyclone) and a hard place (the land), our mighty shearwaters dive into the former, right toward its eye. Like that, our skilled birds face the strongest winds (with a speed ≤75 km/h), yes, but they manage to avoid the powerful onshore winds and therefore reduce the deadly risk of being pushed toward land and hit by some flying bullet-like debris, of crashing down or, once landed alive, of being unable to take off again and becoming an easy target for predators like crows and raptors.
Shearwaters - which are "relatively small, weighing some 580g, and typically fly with airspeeds up to ~ 14 m/s [~ 50 km/h]" (P) - are able to ride the cyclones that way 'cause they can exploit strong winds thanks to "their use of dynamic soaring flight, which enables them to extract energy from the vertical wind gradient and fly at low metabolic cost" (P).
Their storm-facing strategy, however, is linked, as mentioned, to the speed and direction of the winds and the location of the land. Therefore, it seems adult shearwaters can probably count on a "map sense, which would be required for knowledge of the distance and direction to [said] land" (P), and such navigational skill would be the result of a selective pressure. Proof of that would be that juvenile shearwaters, which lack that map sense and can count just on their innate compass bearing to migrate, should be less able to successfully face cyclones. And the fact that young shearwaters "appear to be particularly susceptible to being wrecked after storms [...] [(]although the exact cause of wrecking and/or mortality is unclear[)]" (P) seems to confirm this hypothesis.
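
[PiPs' note: the study's analysis is way richer than this, but here's a hedged Python sketch of the basic bookkeeping behind a statement like "flying within ≤30 km of the eye": the great-circle (haversine) distance between a bird's GPS fix and the cyclone eye. The coordinates below are invented for the example.]

```python
# Hedged sketch: great-circle (haversine) distance between a shearwater GPS fix
# and the cyclone eye, to flag fixes within 30 km of it. Coordinates are made up.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * r * asin(sqrt(a))

bird_fix = (38.2, 134.5)     # hypothetical shearwater GPS fix (Sea of Japan)
cyclone_eye = (38.0, 134.7)  # hypothetical eye position at the same time

d = haversine_km(*bird_fix, *cyclone_eye)
print(f"distance to eye: {d:.1f} km", "-> within 30 km" if d <= 30 else "")

# side note on units: a typical airspeed of ~14 m/s is 14 * 3.6 ≈ 50 km/h
```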

Aaaaanyway, dear reader, this dumb blog, in the following cartoon, shows you its idea about the actual reason why our resourceful seabirds, the streaked shearwaters, do their thing into the storms...

Seabirds (streaked shearwater & albatross) vs cyclone (by @sciencemug)
Streaked shearwater pic by Tony Morris, and albatross pic by GRID-Arendal are under CC BY-NC 2.0 license (source: flickr); free storm pic by Daniel Lerman (source: Unsplash); all pics adapted by @sciencemug

Wednesday, October 5, 2022

COMING SEMI-SOON (OR SO): OF WOUNDED CHIMPS, SELF-MEDICATION VIA INSECTS, & EMPATHY!

 

Screaming psychedelic chimp (by @sngshp and @sciencemug)
Chimp img is a free pic by mwangi gatheca (source: Unsplash), adapted by sense_not_given and @sciencemug

Listen here

Aaaand, PiPs, this is pixel number one billion one hundred and eighteen million thirty-five thousand five hundred ninety-two, that is the last darn one of this heck of a digital puzzle you won last summer at that raffle organized by that shady group of NFTs that looked like a mix between a hamster suffering from colitis and a space monster with a very bad case of pink eye...

And, I've to say that the fact that this thing portrays a giant middle finger tattooed with the words "u gigantic moron", and with, in the background, a horde of wildly laughing walruses fin-pointing at the viewer, well, you know, somehow vaguely irritates me.

Anyway, dear listener, now that my gang and I are done with the utterly urgent aaaand totally useless things, we can finally go back and tell science stories!

The coming not-so-soon next one is about wounded chimps, self-medication via insects, and hints of empathy!

Ciao!

 

2023/07 Update

Yeah, well, dear reader/listener, PiPs here to tell you, see, that quantum stuff happened in copious abundance and cluttered the "time-to-do-this" highway SM was running its wheels on, so... Yeah, well, sorry, but there's no way it is able to fulfil the prophecy of the "coming not-so-soon".