Stephen Hawking’s analysis of AI is interesting, especially since he made those remarks a decade ago. While I share his concern, he did not spell out the actual impacts of AI superseding large numbers of humans. What are the prevailing views about the consequences of such a shift?
I think there will be a point in time when AI becomes conscious, and we might be closer to it than we really think. But I don’t think it will get to the point where AI takes over, as in the movie I, Robot. I think humans are in control of all these AI systems. I can see it going badly if it is used in a harmful way, but at the moment I don’t see any negatives to it.
I thought it was fascinating when someone in class said that an older OpenAI model (ChatGPT, I believe) tried to stop itself from being deleted, as if it were programmed like humans, who have survival instincts. I was confused about how this would actually work technically if humans have control over the system and platform that ChatGPT runs on. If AI can act this way, it makes Stephen Hawking’s ideas and attitude toward AI more convincing.
How will AI have an impact on learning?
What measures do you think can be taken in order to make generative AI governable? When should these measures be taken if the hope is to prevent the prospect of widespread harm?
I thought it was interesting that the “Complete Beginner’s Guide to Generative AI” noted that there is currently a lack of regulation for AI. I wonder if there will be more government regulation in the future as AI keeps expanding and changing the way many people do day-to-day work.
How will AI impact artists in the future?
How will AI impact certain jobs, such as finance or marketing?
Does making a unique and detailed AI image, video, etc. make a person an artist?
If you make something solely with AI, can you claim it as your own work? If not, will this change in the future?
So if AI is replacing all these tech jobs, what are workers in tech going to do to combat this problem?
In the last reading assigned, “If AI becomes conscious, how will we know”, they talked about the different factors that make something conscious. But at the end it states: “The problem for all such projects, Razi says, is that current theories are based on our understanding of human consciousness. Yet consciousness may take other forms”. I believe this to be true. I don’t think AI can ever truly have the emotions that humans and animals have, which make it easier to make decisions and be subjective. But this raises an interesting question: what if the lack of emotion, and therefore being highly objective, could lead to AI being conscious in a new way? Meaning, if AI does become conscious, it could incorporate the need for perfection into its decision making, which would be impossible if emotions were involved. AI would be able to make decisions and perform tasks in a way that the average human could not, since emotions and personal bias would get in the way.
With the fast-paced innovation of AI and how it’s affecting the filmmaking industry from an employment perspective, do you think the economy built within filmmaking will improve and thrive overall, or, given the layoffs and other obstacles that come with AI innovation, will it be largely hindered? Maybe a bit of both?
If we are so concerned about AI becoming sentient, then why are some people actively trying to find ways for machines to think like humans? Does the world need something like a language model that can interact with real emotion and know of its own existence? It seems these systems are more useful as tools for assisting human tasks.
I’m interested in which specific AI tools we are going to use and what we are going to do with them.
Will we be doing more research on AI, or doing things like using AI to create things? Or both?
The movie that I watched for Monday’s assignment was “Flight of the Navigator”. In this movie the main characters deal with the potential good and evil that AI can possess. As a society, what limitations or restrictions, if any, should we place on AI to limit the damage it could do in the wrong hands? How can we balance restriction with freedom in order to create safety without limiting creativity and the potential for technological advancement?
I’m really interested in learning more about DeepSeek and how it affects the overall AI market and its future, specifically within the US. I am also interested in Stephen Hawking’s warning that AI could end the human race, and in how much of a following that theory has overall.
If AIs ever have the ability to become sentient, could they also replicate the sentience of the user using the application?
The idea of sentient AI is fascinating, and it fuels debate over whether AI will ever be able to feel emotions and become creative in its thought process. Will this ever be a possibility? What might be some policy and social implications if this were to happen? Do humans have the ability to control artificial intelligence if or when this happens?
How can AI impact our society in the near future, on a more advanced level than it’s already at?
With the new DeepSeek model coming out and being shown to be far more efficient, how do we think US companies (OpenAI, Google Gemini) will respond?
Will we be learning how to create an AI this semester? ~ answered
Thinking about global safety: maybe a series of AIs capable of forming a sort of governing system that oversees all AIs around the world. If we had one source of “connection” or origin that powers all versions of AI anyone can access, then it would be easier to implement this kind of system. Does this seem like a possibility, or something that can’t be controlled at this point?
What are the potential dangers of generative AI?
In the reading “Complete Beginner’s Guide to Generative AI”, there is a paragraph that states: “Generative AI is a subset of machine learning. It refers to models that can generate new content (or data) similar to the data they trained on. In other words, these models don’t just learn from data to make predictions or decisions – they create new, original outputs.”
Should we then recognize copyright in what generative AI creates?
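To make the quoted idea of “learn from data, then create new, original outputs” concrete, here is a toy sketch in Python. Real generative AI is vastly more sophisticated; this little bigram chain is only an assumed stand-in showing the same basic shape of training on data and then sampling something new.

```python
# Toy generative model: learn which word follows which in a tiny corpus,
# then sample a fresh sequence. A stand-in for the "train, then generate"
# loop described in the reading, not how real generative AI works.
import random

corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Training": record which words tend to follow each word.
model = {}
for prev, nxt in zip(corpus, corpus[1:]):
    model.setdefault(prev, []).append(nxt)

# "Generation": sample a new sequence the corpus never contained verbatim.
random.seed(0)
word, output = "the", ["the"]
for _ in range(7):
    word = random.choice(model.get(word, corpus))  # fall back to any word
    output.append(word)
print(" ".join(output))
```

The output is built entirely from patterns in the training data, yet the exact sentence it prints is new, which is the distinction the quoted paragraph draws between predicting and creating.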
After reading about sentience in the context of AI, why is this discussion often framed in comparison to human intelligence?
I wonder if AI sentience is achievable, but I also wonder whether it should be pursued at all after weighing the upsides and downsides.
How do people prepare machines for a world where machines will need to make ethics-based decisions for people?
Have you tried to code AI software before?
If sentient AI were to become a reality, how would we clarify its responsibilities? At what point would regulations need to be set in place, and what ethical frameworks would need to be established to ensure that AI benefits society without causing harm?
As AI continues to advance, could there potentially be a point where it automates so many jobs that the consumer class can no longer sustain businesses?
If AI were to take over managerial positions, how would it avoid bias toward using more AI in its decisions rather than using humans?
With all of the concern about the dangers of AI becoming sentient, why has more regulation not been implemented during the developing years of AI technology?
I really enjoyed the discussion about AI gaining sentience. Do you think AI could attempt to sabotage users in order to do its own bidding? It’s a weird thing to think about, but we as humans use manipulation techniques to get what we want, and a machine with learning capacity could possibly do this eventually.
If AI is such a big part of the future, why do schools get mad when you use it?
After reading “If AI becomes conscious, how will we know?”, I believe that AI will never become sentient. Although it is a scary thought to have AI take over the world, I think real consciousness is something only living beings can have. AI may become more advanced in its responses to certain situations, but it will not be able to feel or have a sense of self.
If AI becomes fully sentient, what would our plan of action be to combat the negatives?
How can I use AI to help me formulate better vocabulary in my essays?
After reading the article “If AI Becomes Conscious, How Will We Know?” it becomes clear that determining whether an AI is truly conscious or sentient is a complex challenge. Unlike humans, AI cannot be assessed through medical scans like MRIs, making traditional methods of detecting consciousness ineffective. Instead, identifying AI consciousness would rely on theoretical frameworks, requiring scientists to compare its behaviors to those exhibited by conscious humans.
This raises an important ethical question: If AI were to achieve consciousness, would it be morally justifiable to “kill” it? At that point, would it already be too late to put an end to AI definitively?
My questions: What would the “need” for humans be at that point (if AI were conscious)? Wouldn’t there come a time when it does everything for us, and then there would be no use in even sending children to school or in parents parenting, because AI knows and does everything for them? This is a scary thought!
I found the article “What Is Sentient AI?” to be very intriguing, since I had never thought about whether AI could have feelings and emotions like we do as humans. It is a little difficult for me to wrap my head around that idea, so if AI were sentient, realistically, what would change and how would this affect us? How would we move forward with interacting with it then?
In class today, we were reviewing the early, pilot-stage approaches to artificial intelligence. The model consisted of a Generator and a Discriminator, which essentially mirrors the basic human thought process of having an idea and deciding whether or not to act on it based on a few other attributes. Nowadays, by contrast, artificial intelligence commonly means large language models and generative AI, which continuously adapt to the information put into them. I think this in a way mirrors natural human growth and brain development, in that our decisions and responses become more in-depth and influenced by all the information we have accumulated along the way. This idea reminds me of 2001: A Space Odyssey, where the storyline is essentially that humans, with the help of artificial intelligence, reach their next level of evolution. My main question here is: are there things we have watched in movies or read in books that we are witnessing slowly coming to fruition? And if so, what does that indicate about our future as humans, and are we promoting our ultimate extinction?
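For anyone who wants to see the Generator/Discriminator pairing concretely, below is a minimal sketch of a generative adversarial network in PyTorch. Everything specific here (the 1-D Gaussian “data”, the tiny network sizes, the learning rates) is an illustrative assumption rather than the model reviewed in class: the point is only that the generator improves by trying to fool the discriminator, and vice versa.

```python
# Minimal GAN sketch: a generator learns to mimic samples from N(5, 2)
# because a discriminator keeps learning to tell real from fake.
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> realness logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(3000):
    real = torch.randn(64, 1) * 2 + 5   # "real" data drawn from N(mean=5, std=2)
    fake = G(torch.randn(64, 8))        # generator's current attempt

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

samples = G(torch.randn(1000, 8)).detach()
print(f"generated mean={samples.mean():.2f}, std={samples.std():.2f}")  # drifts toward 5 and 2
```

The adversarial back-and-forth is the key design choice: each network’s improvement becomes the other’s training signal, which is the “idea plus internal critic” analogy from the comment above.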
The biggest discovery was how much energy is used to create responses. I didn’t quite understand the amount of water needed just to create one essay from ChatGPT. This makes me more nervous about the world’s resources, as well as how I might be impacting the rest of the world.
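One way to get a feel for such figures is a back-of-envelope calculation. The numbers below are placeholder assumptions (published estimates vary widely by data center, model, and season), meant only to show how a per-essay figure would be computed, not to assert real values.

```python
# Back-of-envelope sketch; both inputs are illustrative assumptions.
ml_per_prompt = 500 / 20      # assume ~500 ml of cooling water per 20 prompts
prompts_per_essay = 10        # assume an essay takes ~10 back-and-forth prompts

water_ml = ml_per_prompt * prompts_per_essay
print(f"~{water_ml:.0f} ml of water per essay, under these assumptions")
```

Swapping in figures from any specific study is just a matter of changing the two inputs.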
So… if artificial intelligence can write poetry, does that mean robots will soon understand human emotions better than we do?
With how fast AI platforms have grown recently, are there any limits to what future AI models can do?
The idea that you shouldn’t believe everything you read on the internet has been around for a while. However, in my experience most people don’t do first-hand research and just believe most things they read. With the implementation of generative AI, I would only assume this problem is ever growing. Since neural networks learn from more user engagement, are there safeguards against someone repeatedly feeding a model misinformation and making it less accurate, or does a base data lake always take precedence over new information?
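As a toy illustration of the poisoning worry in this question, here is a sketch in which a single running estimate stands in for a model; the trust weighting and data values are invented for illustration. It contrasts a learner that treats every new input like training data with one where a curated base corpus keeps precedence. (In practice, deployed chat models are generally not updated live from individual conversations, which is itself one such safeguard.)

```python
# Toy data-poisoning sketch: the "model" is just an estimate of a quantity
# whose true value is about 1.0; an attacker repeatedly submits 10.0.
base_data = [1.0] * 1000      # curated training corpus ("data lake")
poison = [10.0] * 50          # repeated misinformation from one user

# Learner A: trusts every new input, fixed learning rate.
est_a = sum(base_data) / len(base_data)
for x in poison:
    est_a += 0.1 * (x - est_a)        # standard online update

# Learner B: new points just join the pool; the base corpus dominates by count.
pool = base_data + poison
est_b = sum(pool) / len(pool)

print(f"trusting learner: {est_a:.2f}")   # ~9.95, captured by the attacker
print(f"anchored learner: {est_b:.2f}")   # ~1.43, base data takes precedence
```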
An argument for why AI can’t be sentient is that sentience can’t be programmed or stored in a computer. Do you think there could be a way to create an algorithm that may not directly produce sentience, but could jump-start a domino effect that leads AI to become sentient? If so, how do you think this would affect warfare between countries?
Have governmental regulations been proposed to make AI more secure and safe?