We’re back for another semester of Creating with Generative AI: Shaping the Future. Let’s Talk About it!
Since AI is developing so rapidly, what do you think AI’s maximum capabilities are, and when could we reach them? Also, will the development of AI ever halt?
If AI became conscious, would it immediately tell us or hide it?
Is it possible to store the “tokens” or memory on a local drive when using publicly accessible LLMs, and if so, would that prevent it from being shared with everyone?
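On the local-storage question: a model’s context (the “tokens”) only lives for the length of a single request, so you can keep the running conversation on your own disk and replay it with each call instead of relying on the provider’s hosted chat history. Here is a minimal sketch, assuming the OpenAI Python SDK and a hypothetical chat_history.json file; note that your copy stays local, but the provider still sees whatever you send, and whether it is retained or used for training depends on the provider’s policy, not on where you store it.

```python
# A minimal sketch: keep the conversation "memory" in a local JSON file
# and send it with each API call. File name and model are assumptions.
import json
from pathlib import Path

from openai import OpenAI  # pip install openai

HISTORY = Path("chat_history.json")  # hypothetical local store
client = OpenAI()  # reads OPENAI_API_KEY from the environment


def load_history() -> list:
    return json.loads(HISTORY.read_text()) if HISTORY.exists() else []


def chat(user_message: str) -> str:
    messages = load_history()
    messages.append({"role": "user", "content": user_message})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-completion model works here
        messages=messages,
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    HISTORY.write_text(json.dumps(messages, indent=2))  # memory stays on disk
    return reply
```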
I wonder if AI will reach a point where its decision-making can surpass humans’… like making those life-or-death medical decisions, running the legal world, creating optimal punishments, and more. Any ideas on how we could make the building of autonomous AIs align with a good set of moral and rational values?
How can we make AI develop more cautiously and more safely, per Stephen Hawking’s warnings, to ensure it only benefits society without causing damage or harm?
How can we prepare to both take advantage of and not get steamrolled by new and upcoming AI technologies?
Say AI can gain sentience: what would the ethical approach be? Would we have to set it “free,” and if so, what would it do on its own?
What is required to make sentient AI work? With current processing power, we can only manage to make something simple like a text-based AI or image/video generator AI.
What would our main use for sentient AI be? I would use it to make infinite virtual universes, but I don’t think that would be what most people would use it for.
Let’s talk about the possibility of ChatGPT becoming obsolete in society within the next 10 years.
Recently in class we discussed how large language models can be fed false information and can be easily tricked. How would we know whether big chatbots such as ChatGPT are being fed wrong information and repeating it to us? Are there barriers that stop information like that from being fed into the program?
My question is: what is the productive use of AI?
Do the large companies that build the LLMs plan to compensate other companies from which they gather their information?
Should generative AI have limits? Why or why not?
Despite all the criticism of AI having no regulation or ethical system, why isn’t one being created or built? What would be the best ways to create regulations and ethical systems for using different types of AI?
So, in I, Robot, V.I.K.I.’s whole plan to save humanity was to eradicate some vestigial humans. Will Smith’s character is obviously not on board because he hates robots, and V.I.K.I. is arrogant about how well her plan will work. But I do believe that if she had simply told him who her targets were, or even if she had the right targets, her plan might have come to fruition. However, I believe her plan was still possibly misguided because of her training data. She claims she watches humanity wage war. So who is she going to target? Civilians… which is what happened… Anyway, V.I.K.I. had a solid plan; all she needed was to do it discreetly and target the right few people.
Questions: Last Thursday in class, we learned about what large language models can do and their impact. Because of this, I’m now also curious about the different types of AI. For example, I already know about AI and AGI. I wonder what the other types of AI are and what they are able to do.
What are your current ways of using AIs (especially LLMs), and how would having sentience in AI change your experience?
How do you expect AI to grow in the next 5, 10, 15, and 50 years?
Can AI learn what different emotions are?
How can AI learn what emotions feel like?
Can you go more in-depth with predictive power?
What is the limit for how many tokens a model can handle?
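On the token-limit question: the context window differs by model, from a few thousand tokens for early GPT models to hundreds of thousands for recent ones, and you can measure how much of a window a given text consumes. A minimal sketch, assuming the open-source tiktoken tokenizer and an illustrative 128,000-token window:

```python
# Count tokens with tiktoken (pip install tiktoken) to see how much of a
# model's context window a piece of text uses.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/4-era models
text = "Tokens are chunks of text, roughly 3-4 English characters each."
tokens = enc.encode(text)

print(f"{len(tokens)} tokens, first few ids: {tokens[:8]}")

CONTEXT_WINDOW = 128_000  # assumption for illustration; varies by model
print(f"Uses {len(tokens) / CONTEXT_WINDOW:.4%} of the window")
```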
How can technology/AI change the future?
What is your favorite thing to do with AI?
How can you use AI more effectively when it comes to doing research on a topic?
How do we use AI formally?
What is the most recommended AI platform to use? Are there any worth spending money on?
How would religion and emotional decisions play a role in AI if it were able to be sentient?
I would like to talk about the difference between true AI (AI with sentience and awareness) versus humans who feel and experience the same things. Will we ever create true AI? If so, how is this different from creating real human life?
My question for you and the class is, “How could we address data privacy and security when training and deploying generative AI models that may require a plethora of personal data?”
AI is becoming a tool that many people are now using, and with that use come more upgrades and transformations. In the future, AI might become something that has sentience. If this possibility occurs, is there a chance that we could use it to our advantage? In other words, how can we use AI sentience to better our experience with it?
When humans first began training GPT, did they make mistakes? With 30k+ questions and answers, with rankings for all of them, how did they pull off appropriate quality control?
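On the quality-control question: one published approach, from OpenAI’s InstructGPT work, is to have multiple labelers rank the same candidate responses, measure inter-labeler agreement, and train a reward model on pairwise comparisons so that individual labeling mistakes tend to average out. Below is a minimal sketch of that pairwise ranking loss, assuming PyTorch; the scores would come from a reward model (not shown) that maps a prompt-response pair to a scalar.

```python
# A minimal sketch of the pairwise ranking loss used to train a reward model
# from human preference rankings (as in InstructGPT). The score tensors are
# assumed outputs of a hypothetical reward model, one scalar per response.
import torch
import torch.nn.functional as F


def ranking_loss(score_chosen: torch.Tensor, score_rejected: torch.Tensor) -> torch.Tensor:
    # Push the human-preferred response's score above the rejected one's:
    # loss = -log(sigmoid(r_chosen - r_rejected))
    return -F.logsigmoid(score_chosen - score_rejected).mean()


# Toy example: three prompts, two scored responses each.
chosen = torch.tensor([1.2, 0.4, 2.0])
rejected = torch.tensor([0.3, 0.9, 1.1])
print(ranking_loss(chosen, rejected))  # shrinks as the model agrees with labelers
```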
My question is what the general corporate attitude towards AI is. AI can certainly be viewed as a tool encouraging laziness, but when used right it can be extremely valuable. Where does the balance fall between these two scenarios?
One interesting question I have is how long do you think the AI gold rush will last until every business is dependent on AI? And would now be the best time to start a business with AI?
After reading the “If AI becomes conscious” article, my main question is: if we have known the biology of bats and lots of other mammals for many years and still don’t understand what it is like to be a bat or any other animal, or what goes on in their brains, why is it such a big deal if AI becomes conscious and we don’t know why or how? The way we find out about consciousness in humans and animals is scientific research, testing different hypotheses on them, but how would you even go about doing that with an AI, which cannot give you genuine human answers?
Why do sentient AI systems not exist in the real world? If we know what sentient AI is and what it entails, why have we not created it yet?
Could AI ever fully understand and replicate human emotions or creativity? What would this mean for human-AI interactions?
What is ChatGPT best used for?
In the near future, if technology gets so advanced that generative AI trains on human emotions and eventually becomes a sentient AI with a physical body and a humanoid form, WITH the ability to reproduce itself (like how some generative AI now has the ability to train on data or information that it generated), would there be a discussion about whether it could have a life value equal to human beings? Would people accept it as equal to human beings, see it as another type of intellectual being, or still view it as AI with no life value?
What role do governments play in the development of AI?
What are the main industries that will be impacted by AI?
To what extent can AI be predictive?
How can we prevent deepfakes made with AI?
Which viewpoint of the Soft Systems Methodology do current freely accessible AI tools either not have or lack? What idea or advancement would be needed to fix this?
Is there any time in the future when AI could turn into a humanlike robot that can help with daily tasks?
Is there a fear that, in the future, AI will take over and supplant the human race?
How do we keep AI safe enough that it doesn’t go too far?
Question: When will AI in our world become too much?
Why hasn’t the U.S. made AI and generative AI usage more of a priority, while the EU has?
To what extent should AI be allowed to interpret and adapt ethical principles like the Three Laws of Robotics on its own? What are the risks of AI developing its own understanding of morality, and how might that conflict with human values?
How can society balance the potential benefits of AI with its downfalls?
With the constant advancement in AI and technology, are there any precautions being taken to shut down these technologies if they start to get out of hand? Like a shut-off switch.
To what extent is deepfaking capable of messing with our world? Right now it is mainly used for entertainment, but could it be used to disrupt things like government and the economy?
Could it ever really be on the table that AI might one day have the potential to take over the world, like in a crazy sci-fi movie? If so, how long might it realistically take for something of that magnitude to happen? 5 years? 10 years? 100?
What is the worst AI can do?
Sorry this is late; I put it in the comments by accident. Can machines possess true creativity and obtain a conscience?