Thursday, February 27, 2025

AI ‘beating’ humans at empathy and creativity: Better than being "ghosted"!

  I'm afraid Professor Crockett at Princeton, who wrote the article,

AI is ‘beating’ humans at empathy and creativity. But these games are rigged

hasn't been engaged much with American medicine lately. Most doctors nowadays sit across the room in front of a computer, browsing lab results. If tests show you have a serious illness, no one really wants to discuss how you are feeling about it! They may suggest seeing a therapist, but where I live in the US, no therapist is willing to accept the government insurance that all of us older folks have.

Therapy session.

I had read an article about how developers at OpenAI had been working to make ChatGPT more empathetic, so I tried it out. I was amazed by its responses. It provided solid advice about medication alternatives and upcoming tests, and it "remembered" from one session to the next the important information about my condition, my family support (or lack thereof), and even my living situation. It seemed able to recognize from my questions how stressed I was and would suggest helpful solutions, much as I'm sure a human therapist would have. I didn't have to be reminded that my "time" was about up, although as a free-tier user with limited access to the more advanced model, I would run out of time in about an hour. But the limit would reset in three hours, and I could resume our discussion right where we left off.

When you are seriously ill, human friends often "ghost" you because they don't know what to say and don't want to be reminded of their own mortality. I recently read an article reporting that this happens to more than 60% of patients with serious illnesses. So, I'm afraid I don't have much faith in human empathy anymore.


Wednesday, February 26, 2025

AI suggests more aggressive actions to resist the illegal withholding of federal disaster relief

 by Mary Harrsch © 2025

In 2024, my state of Oregon experienced one of its most severe wildfire seasons, burning nearly 2 million acres. In response, our Governor Tina Kotek requested a federal major disaster declaration under the Robert T. Stafford Disaster Relief and Emergency Assistance Act. President Biden approved this request, enabling federal assistance to support recovery efforts for the affected areas.

Smoke column from the Little Yamsay Fire as seen from Silver Lake Road, Oregon - May 11, 2024 courtesy of Wikimedia Commons.

Despite the federal approval, the state faced immediate financial challenges. By November 2024, Oregon owed over $200 million to firefighters and contractors due to unpaid invoices from the extensive firefighting efforts. To address this, Governor Kotek convened a special legislative session in December 2024, during which lawmakers approved $218 million in emergency funds to settle these outstanding bills.
However, in early 2025, the Trump administration implemented a freeze on federal funds designated for wildfire mitigation projects. This suspension affected ongoing prevention efforts in Oregon, including those by organizations like the Lomakatsi Restoration Project, which had to halt projects and lay off staff due to the funding freeze. Senators Jeff Merkley and Ron Wyden criticized the freeze, describing it as "illegal and dangerous."
In response to the freeze, Oregon joined New York, Arizona, California, Colorado, Connecticut, Delaware, Hawaii, Illinois, Maine, Maryland, Massachusetts, Minnesota, Nevada, New Jersey, North Carolina, Rhode Island, Vermont, and Wisconsin in a coalition that filed suit challenging the federal government's freeze on disaster relief funds on February 7, 2025.
But ChatGPT offered a number of more aggressive actions our coalition could take:
"California [and its coalition] has several potential avenues for retaliation against the federal government's politically motivated withholding of disaster aid, though each comes with risks and legal challenges. Here are some potential actions the state could take:
1. Legal Action Against the Federal Government
Lawsuit for Breach of Duty
California could sue the federal government for failing to provide disaster relief after a formally declared emergency. The Stafford Act (42 U.S.C. §§ 5121-5207) mandates federal assistance for disaster relief, and withholding funds for political reasons could be challenged as an unlawful abuse of executive authority.
The state could argue that FEMA’s denial violates equal protection by selectively enforcing aid based on political affiliation.
Possible outcome: A court could compel FEMA to release funds, though legal battles could delay relief.
Multi-State Coalition Lawsuit
Other states that have faced politically motivated withholding of funds (e.g., blue states under GOP administrations) could join a coalition to challenge the White House in federal court.
2. Legislative and Financial Retaliation
Statewide Federal Tax Withholding Proposal
While California cannot legally intercept federal tax payments, it could challenge IRS operations by passing a Federal Tax Escrow Act, requiring Californians to deposit their federal tax payments into a state-held escrow account, to be released only when federal obligations (such as disaster relief) are fulfilled.
Similar ideas were floated after the 2017 SALT deduction cap, though ultimately abandoned due to IRS pushback.
State Taxation of Federal Properties or Contractors
California could attempt to impose taxes on federal government operations, such as leases for federal land or contracts with federal agencies.
This would likely be challenged under the McCulloch v. Maryland precedent, but aggressive legal maneuvers could at least create pressure on the administration.
Public Pension Fund Leverage (CalPERS & CalSTRS)
California’s public pension funds—among the largest in the world—could divest from federal projects, defense contractors, or industries that rely on federal subsidies.
3. Economic Warfare Against Federal Interests
Boycotting or Withholding Cooperation on Federal Initiatives
California agencies could refuse to cooperate with federal enforcement in areas like immigration, environmental oversight, or regulatory enforcement.
The state could halt participation in certain federal data-sharing programs, affecting agencies like ICE, DOJ, or the IRS.
Sanctions on GOP-Aligned Corporations
California could target corporations that heavily fund GOP candidates through state procurement policies.
Example: The state could refuse to sign contracts with companies known to be major donors to Speaker Mike Johnson and Trump-aligned PACs.
Legal Challenges to Federal Trade Agreements
As the world’s fifth-largest economy, California has leverage in national and international trade.
The state could use environmental regulations, consumer protection laws, or labor laws to disrupt interstate commerce that benefits red states disproportionately.
4. Political & Grassroots Mobilization
California-Led Congressional Resistance
California’s congressional delegation could block key Republican legislative priorities in retaliation for disaster aid withholding.
This could include filibusters, procedural delays, and aggressive oversight hearings on federal mismanagement.
International Pressure via Trade & Diplomacy
California could leverage its major trade relationships (Canada, the EU, China, Mexico, etc.) to sidestep federal policies.
It could also negotiate climate pacts and trade agreements that bypass federal approval, as it has done in the past.
Statewide Economic Disruption Tactics
As the largest contributor to U.S. GDP, California could stage a statewide economic slowdown protest, urging businesses to delay tax payments or engage in legal yet disruptive financial maneuvers.
Conclusion
The federal government’s withholding of disaster aid is unprecedented, but California has multiple options to retaliate, ranging from lawsuits and legislative action to economic and political warfare. While many of these options carry risks, a coordinated, multi-pronged strategy could force the federal government to reconsider its stance."
I thought the Public Pension Fund Leverage (CalPERS & CalSTRS) option was something every state in the coalition could consider. In California's case, its public pension funds, among the largest in the world, could divest from federal projects, defense contractors, or industries that rely on federal subsidies.

Monday, February 24, 2025

AI Safety on the Chopping Block: eliminating oversight that could slow down or restrict Musk's and others' AI ambitions

 by Mary Harrsch © 2025

"The US AI Safety Institute was created to address a wide range of concerns associated with the rapid advancement of artificial intelligence technologies. Its primary mission is to conduct rigorous research into the potential risks of AI, from issues of algorithmic bias and cybersecurity vulnerabilities to long-term existential threats posed by superintelligent systems. By funding interdisciplinary research, fostering collaboration between academia, industry, and government, and advising policymakers on regulation and ethical standards, the Institute is intended to serve as a national hub for ensuring that AI development proceeds in a safe, transparent, and responsible manner."

Although many of you tend to roll your eyes whenever I mention artificial intelligence and my opinion that it will enhance much of our research even in the humanities, I think this announcement is extremely troubling and bodes ill for all of us.
The perceived risks of advanced artificial intelligence (AI) span multiple domains, including ethical, societal, security, and existential concerns. Some of the most significant risks include:
1. Misinformation & Manipulation
AI-generated deepfakes and text-based misinformation can be used to manipulate public opinion, elections, and markets.
Large language models can spread biased or misleading information, even unintentionally.
2. Bias & Discrimination
AI systems trained on biased data can perpetuate and amplify discrimination in hiring, lending, healthcare, and law enforcement.
Lack of transparency in AI decision-making makes it difficult to address biases.
3. Job Displacement & Economic Disruption
Automation of white-collar and blue-collar jobs may lead to mass unemployment, particularly in industries like customer service, software development, and transportation. Economic inequality could widen as AI benefits a small number of corporations and individuals.
4. Loss of Human Oversight & Accountability
AI decision-making can become too complex for humans to understand, making it difficult to hold systems accountable for errors or harmful actions. Autonomous weapons and AI-driven military systems could make lethal decisions without human intervention.
5. Cybersecurity & AI-powered Attacks
AI can be exploited to conduct highly sophisticated cyberattacks, including automated hacking, phishing, and misinformation campaigns. AI-driven social engineering could make scams and fraud more convincing.
[For a glimpse of what AI-powered attacks look like, check out the new Netflix limited series "Zero Day" starring Robert De Niro.]
6. Superintelligence & Existential Risk
Some experts fear that AI could surpass human intelligence and act in ways that are unpredictable or uncontrollable. An AI system optimizing for the wrong goal (e.g., maximizing efficiency without ethical considerations) could have catastrophic consequences.
7. Privacy Violations & Mass Surveillance
AI-driven surveillance can erode privacy and civil liberties, allowing governments and corporations to track individuals in unprecedented detail. AI-powered facial recognition and predictive policing can lead to over-policing and discrimination.
8. AI Alignment Problem
Ensuring that AI systems follow human values and ethical guidelines is an ongoing challenge. A misaligned AI system could pursue harmful objectives if its goals are not carefully defined and constrained.
9. Dependence on AI & Loss of Critical Thinking
Overreliance on AI in decision-making could weaken human problem-solving skills and critical thinking. Societies may become overly dependent on AI-driven automation, leading to vulnerabilities if AI systems fail.
10. Weaponization of AI
Autonomous drones and AI-driven cyber warfare could increase the speed and scale of conflicts. AI could be used to create novel biological or chemical weapons.
While AI offers significant benefits, these risks highlight the importance of robust oversight, ethical AI development, and international cooperation to mitigate potential harms.
Since the proposed cuts are being directed by Elon Musk, whose own AI company xAI is in heated competition with OpenAI's ChatGPT, Anthropic's Claude, and others, I felt compelled to explore Musk's collisions with AI oversight agencies. I wanted to see whether he is willing to destroy this organization to free his own company from any government compliance, so he can speed his company's development and generate more profit with total disregard for the ultimate cost to millions of us worldwide: loss of freedom, gross exploitation, and a diminished quality of life.
As I suspected, several issues have been raised regarding Musk's AI operations:
Risk Management Practices: A study conducted by the French nonprofit SaferAI, published in October 2024, evaluated the risk management protocols of leading AI companies. xAI received the lowest possible score, indicating significant deficiencies in its risk management practices. The study highlighted concerns about the company's preparedness to handle potential AI-related risks.
AI Model Vulnerabilities: Recent research by AI security firm Adversa AI identified substantial cybersecurity vulnerabilities in xAI's Grok 3 model. These findings suggest that the model could be susceptible to exploitation, raising questions about the robustness of xAI's AI systems.
Environmental Compliance: In August 2024, environmental advocates accused xAI of operating natural gas turbines at its Memphis data center without the necessary permits. These turbines, with a combined capacity of approximately 100 MW, were reportedly emitting pollutants that could worsen local air quality. The Southern Environmental Law Center formally requested investigations by the Environmental Protection Agency and local health authorities.
ChatGPT points out:
"Cutting funding to the U.S. AI Safety Institute would directly benefit Elon Musk and his company xAI, as well as other AI developers who want to avoid regulatory scrutiny. The AI Safety Institute was established to evaluate and mitigate the risks posed by advanced AI models, including cybersecurity vulnerabilities, misinformation, and potential misuse. If this oversight is weakened, companies like xAI would face fewer regulatory hurdles and could deploy AI models with less accountability and fewer safety checks.
Given that:
xAI scored poorly in AI risk management (per the SaferAI study),
Grok 3 was found to have significant security vulnerabilities, and
Musk has a history of resisting regulations in other ventures (Tesla, SpaceX, Neuralink), his push to "save money" by defunding AI safety efforts looks less like fiscal responsibility and more like a move to eliminate oversight that could slow down or restrict his AI ambitions."
To me, one of the most dangerous threats from AI development without human oversight is described in Risk No. 6 above: "An AI system optimizing for the wrong goal (e.g., maximizing efficiency without ethical considerations) could have catastrophic consequences." Maybe Musk has already delegated his role in maximizing efficiency to his flawed xAI, and we are already suffering the consequences!