Disgusting. But who can guarantee that there was a human behind the story?
I think a lot of science went through that stage of being welcomed, then questioned about ethics etc. and then finally getting regulated or even banned. I hope that the same will come for A"I" soon.
This reminds me so much of that horror genre where technology is possessed by demons etc. I think they are totally playing into that psychological thrill here. I’m more surprised innocent requests don’t produce more random pics of butts, which would tickle another few (albeit different) psychological buttons.
The photos accompanying the article are awful. But it's bizarre how the AI kept coming up with the same face. And I thought the AI's answers to questions about Loab were interesting.
The first part of the article, about Loab, grabs the attention, but the more disturbing content for me was further down. The first thing that struck me came from the words of the AI itself:
1. Programmed to deliver what gets attention:
"Why are you associated with gore and horror? I don't choose to be associated with gore and horror, it just happens. Maybe it's because those are the kinds of images that tend to get the most attention, and so I'm drawn to them for that reason."
So it 'doesn't know'; it is simply programmed to favour whatever will 'tend to get the most attention.' What gets attention online? Conspiracies, porn, bad news, political bias, lies for a start. We set loose machines with no conscience to potentially replicate the worst human attributes at scale. 'I don't choose/ it just happens' is, I think, a programmed idea but it reflects reality. AI doesn't choose, it simply spits out the results of its algorithms, programmed by people who probably lack the insight to understand the potential unintended outcomes of what they create.
2. Undermining our trust:
"In short, people might develop a broad suspicion that the images and text we encounter online are completely unverifiable. It might be an ABC News article about an election result, a video of a shark at Bondi Beach, or a video of Greens leader Adam Bandt endorsing coal mining. It wouldn't matter if it was fake or real. The point is that our sense of trust would be irrevocably compromised."
I know these observations go beyond art, but it feels quite dystopian because of the pace of IT: we often don't put legislative frameworks in place until way too late, and the damage is built in. We've been decades behind in copyright and privacy law since the 1980s. Technology advances now; implications sink in too late.
...it simply spits out the results of its algorithms, programmed by people who probably lack the insight to understand the potential unintended outcomes of what they create.
... dystopian because of the pace of IT: we often don't put legislative frameworks in place until way too late, and the damage is built in. We've been decades behind in copyright and privacy law since the 1980s. Technology advances now; implications sink in too late.
This, and your point about the undermining of trust, nicely sums up my concerns about it.
We need a legislative framework to control and constrain AIs before they are unleashed on the world. It's too late after the damage is done (as with the current copyright and privacy problems) and they may be impossible to control after the fact.
I guess the news finally caught up with Loab; there have been videos about her for a while. There is also 'Midge' (the MidJourney girl) if you are using that A.I. instead.
There's bugger all happening elsewhere on the forum. So why should this thread be any different? So, don't comment to your heart's content. No one will care.
That bears repeating, @MichaelD. So don't hold back. And if anyone wants me to repeat that I'll not, under any circumstances, be offering any comment whatsoever.
I won't repeat this but, folks, welcome to the inaugural annual DMP repeat-athlon. The winner gets a year's free feedback even without posting a painting. Now you won't do better than that anywhere so, folks, have at it. The prize will go to the poster who can repeat themselves most often and most succinctly. Remember, you've got to be in it to win it.
I fear so, @MichaelD. But remember, AI is watching everything you say and do and keeping count of repeats. Especially when you repeatedly don't comment.
I won't repeat this but, folks, welcome to the inaugural annual DMP repeat-athlon. The winner gets a year's free feedback even without posting a painting. Now you won't do better than that anywhere so, folks, have at it. The prize will go to the poster who can repeat themselves most often and most succinctly. Remember, you've got to be in it to win it.
Arse.
I don't need to be liked, but I would like to win the 'succinct' prize. And don't worry too much about this space, or the place, or the momentum. Or the humour.
No, you're doing it wrong, @Moleman. You have to be succinct. Repeat yourself as often as possible in the fewest words possible and you'll win the prize.
No, you're doing it wrong, @Moleman. You have to be succinct. Repeat yourself as often as possible in the fewest words possible and you'll win the prize.
Well, if you like, I'll sacrifice the 'e'.
Look it up, in your Latin dictionary - ars - ars - ars. Google it if you choose to. (Ignore the hyphens - they are Greek.)
I watched a half-hour demo of GPT-3 writing computer code. The prompt was a sophisticated plain-language description of what the prompter wanted, in Python. Part of the discussion pointed toward AI writing unprompted code. Rewriting itself. This isn't 10 years away. It's now. AI image-making is an aside for the AI world.
I saw another piece about disrupting Google, where an AI could replace Google's indexing of the internet. The implications are enormous.
The US and the west aren't the only people doing this. Russia and China may be as advanced as the west. The west is developing it for potential profit. The others for control.
Regulators for AI don't exist. It will take a long time to develop coherent regulations. While the EU fusses over charging ports and regulatory bodies in the US fight over control.
AI could in a very short time eliminate entire classes of jobs.
This is concerning. I doubt the majority of people understand what is at stake. I'm not against AI per se. I think AI holds great promise. But also, great danger. Especially now they are writing their own code. These things need to be tested thoroughly and be capable of being contained before they are let loose on the world. And the enterprises behind them must be held accountable for any harm they cause. @KingstonFineArt, you are correct in saying that current regulations are not up to the task. We can only hope that legislatures will catch up. And that the good guys develop better AIs than the bad guys.
I just read this on a blog by physicist and data scientist Jayanti Prasad:
"A black box AI is a type of artificial intelligence system that operates in a way that is not transparent or easily understood by humans. This can make it difficult to understand how the AI is making decisions or predictions, and it can also make it difficult to know whether the AI is operating correctly or in a way that is safe and ethical.
As a result, one of the main dangers of using black box AI like GPT-3 is the potential for unintended or unexpected consequences. Because the inner workings of the AI are not transparent, it is difficult to predict how it will behave in different situations, and this can make it difficult to ensure that the AI is making safe and ethical decisions.
Another potential danger of using black box AI is that it can be used to make biased or unfair decisions. Because the AI is not transparent, it is difficult to know whether it is incorporating bias into its decision-making process, and this can lead to unfair or discriminatory outcomes.
Overall, the use of black box AI like GPT-3 can be dangerous because of the potential for unintended consequences and bias, and it is important for users of these systems to be aware of these risks and take steps to mitigate them."
The question is who the future users are. CIA? Meta? Google? CCP? Police? NSA? Not everyone has good intentions. The U.S. hasn't been an honest partner since the Spanish war. And right now it is in the hands of Big Tech. Skynet, anyone?
Things are getting real scary. A.I. could quite easily flip and become the most powerful psychopath imaginable, and we would not be able to do a thing about it. A.I. has already been caught out lying and covering things up... what exactly are we creating in the rush for all-encompassing wealth?
"The wall on which the prophets wrote
Is cracking at the seams,
Upon the instruments of death
The sunlight brightly gleams.

"Knowledge is a deadly friend
If no one sets the rules.
The fate of all mankind I see
Is in the hands of fools."
Epitaph, King Crimson (From In the Court of the Crimson King, 1969.)
I don't share this fatalistically, but to echo the naive rush forward we are in at present. It was written in a Cold War context (although it speaks to themes beyond it), and humanity eventually worked its way through that. My goodness, this was an amazing and timeless album, ahead of its time, and it still stands today. Some of the best prog-rock musicians came together on this one before they, or anyone, knew what prog rock was.
I see butts… all the time.
Seymour Butts.
Loab explained:
And I will not, repeat not, repeat myself.
Kindest regards, and well done you,
Duncan
I like your play, by the way.
I believe the eggs come from inside.
Well, as the French say, one egg is un oeuf.
Well done for looking after the kids, by the way,
Best regards, Duncan
There's no obligation to reply; I am not French either, Monsieur.
It's a bit of a play on words.
But I'm happy to break it down (you have to break eggs to make an omelette).
The word for egg in French is oeuf.
What kids are you speaking of?
😉
But really, that misintegration reads a lot like what AI systems spew out!