Our semester is now turning to using generative AI. Here’s what students want to talk about.
Today in class, we discussed many different types of generative AI, and I was unfamiliar with a lot of them. I want to stay educated about these tools, and I want to keep up with the innovation and news around AI.
What are the most creative uses of AI or GPT that we've been able to come up with? And are there any "life hack" type uses of AI that we rely on regularly and just don't think about?
What kinds of jobs will be affected by AI in the future, and how will they be affected?
How far is AI picture/video generation really going to go? As AI gets better and better at producing realistic results, where will that take us? Will it be long before we see people using AI to generate clips for documentaries or movies, since it has such a good grasp on how to generate things that would be extremely time-consuming to film? And on top of that, could laws really even be placed against the generation of some of these things because of the potential harm they could cause?
If AI ever got to a point where it could create physical objects, would people continue to allow AI to make those objects if it had the potential to make something harmful or dangerous?
I’m interested in hearing what people have used AI tools for. I don’t mean messing around, using them just for fun, or asking ChatGPT for the answers to your homework, but what kinds of things have you used AI to do? Or what AI stories do you have?
How well can AI develop tools to teach the next generation of children? Who would be inputting that information, and would access be limited to public schools? In what ways might AI be most likely to overstep its boundaries as a teaching tool (not counting the simple "do my homework" case)? Would AI encourage discussion on subjects?
Looking ahead, how advanced could AI potentially get in the next 10 years?
Creating scenarios of people using AI for bad things led me to this question: will AI ever be able to police itself? A big fear with AI is people not knowing what will happen with it and whether it will "take over the world." A lot of factors would have to contribute to this, such as AI becoming sentient, because it would need the ability to understand what's bad, what's good, and what it is and isn't allowed to do. I wonder if there will ever be a line of code that gives AI that ability.
Is AI able to use AI to source its responses? Or does AI already source its answers and responses from other forms of AI?
What is the best risk-mitigation strategy to protect victims of harmful generative AI?
With the new text-to-video AI being released, how will this affect the entertainment industry?
After watching the video in class, I started thinking about the ethics of creating AI. Are there ethical considerations that should be taken into account when creating or developing a chat AI? If so, how should developers go about it, and should there be restrictions?
Recently OpenAI previewed "Sora", their new text-to-video AI generator, and the results were strikingly realistic. OpenAI CEO Sam Altman went on Twitter, or X, challenged followers to prompt Sora with video ideas, and then posted the responses. Sora was ridiculously accurate, which raises a lot of questions about the potential risks and dangers of this new tech. Is video AI like Sora ethical, and how can we prevent it from being misused to cause harm in the future?
1. How does Windows have the authority to install Copilot and AI on my computer without me having to acknowledge it or even know?
2. Is ChatGPT currently the most versatile AI software on the market?
Is there any sort of task that AI won't be able to help us with in the future?
I enjoyed learning about different movies and how they portray AI. I'm wondering if and when the first movie rendered entirely by AI could exist. I would bet it will be a stop-motion film or something similar, but I wonder whether AI would be capable of telling a story that way.
Are there specific keywords or phrases that will reliably give me good results when generating with AI? If so, what would they be?
I wonder how far into the future it will be before we see AI automating entire careers without any human involvement.
When creating with AI, who gets credit: the person telling it what to create, or the AI itself?
Do you think there are any companies developing AI that you can have a real conversation with? When I say this, I mean something like an Iron Man/Jarvis situation.
How does the AI grow its "knowledge"?
What are some examples of successful human-AI collaboration, and how can we promote more effective partnerships between humans and intelligent machines?
Is there really a big difference between ChatGPT and ChatGPT Plus? If so, what is the difference, and is paying an extra $20 a month really worth it if you still have to train the bot?
How do researchers evaluate the performance and quality of generative AI outputs?
How can generative AI aid us in scientific exploration?
What is potentially the most dangerous form of artificial intelligence: a website, a robot, or something else?