Thursday, May 08, 2025

Academic publishing tools tripping up AI's automation of bibliography processes

By Mary Harrsch 

Lately, I've been working very hard to consolidate the six articles I wrote under the title "Isolation and Climate Change Factors in Delayed Technological Development of the Ancient Americas" into a single narrative-style article, complete with citations and extensive illustrations. I've also included additional information, and the file is now 72 pages long, including a 23-page bibliography.

College student compiling a bibliography. Image generated by Adobe Firefly with a Robert Griffing painting used as a style reference image.

It is obviously too big to share directly on Facebook or even post to one of my blogs, so ChatGPT suggested I submit it to Zenodo (https://zenodo.org/), a repository developed by CERN that provides storage for Open Access research articles and assigns a DOI if your article doesn't yet have one. It also suggested I obtain an ORCID so my work would never be confused with that of any other author with the same name. (Yes, there is another Mary Harrsch out there, and I bet you thought I had to be one of a kind!)

So, yesterday, I decided to establish an account on ORCID and then on Zenodo to accomplish this. Obtaining an ORCID was not a problem; I just had to spend about an hour populating it with my education and certification information, professional memberships, public service activities, etc. Then I set up an account with Zenodo and prepared to upload my paper, now titled "The Paradox of Plenty: How Isolation and Abundance Stalled Innovation in the Ancient Americas."
The upload form for Zenodo is not very user friendly and labels fields rather strangely (at least from my American perspective). First of all, it appears to require you to submit your paper to a "community" for approval. I have since learned this is not actually required, despite appearances, and that it will slow down your submission approval by several days. I also hate gatekeepers! DeepSeek told me not to assign a community in the future, and that if I get tired of waiting for approval, I can simply remove my submission and complete another submission request without a community selection; it will then be published right away and I can obtain the DOI assigned to it.
Next, there is a field labeled "Creator". Below it is a field labeled "Role". When I scrolled through the roles, I saw "author" was not listed, so I chose "researcher" and an error message immediately popped up. I checked with DeepSeek again, and it said to just enter my name and leave the "Role" field blank. Then there was a field to enter an identifier like an ORCID, so I entered my assigned ORCID and another error message popped up. This may have been the result of my setting up my Zenodo account with my ORCID to begin with. ChatGPT told me to link my ORCID in my user settings and leave the field on the form blank. When I checked my settings, they already showed I had linked my ORCID account.
Then I got to the "References" field. Even though my article contains inline citations and a complete bibliography at the end, Zenodo expects your bibliography to be submitted as a .bib file. OK. DeepSeek told me I could create a .bib file using a tool at Text2bib.org. I had already formatted each of my bibliography entries in proper APA style, so I was given the following instructions (I've sketched in code what such a conversion looks like after the list):
Copy the APA-formatted bibliography from Word
Paste it into Notepad to remove Word formatting
Save it as a carriage-return-delimited .txt file
Log in to text2bib.org
Submit the .txt file for conversion
Be sure to select "carriage return" as the delimiter type
Check for errors using either the website's correction interface (buttons and a popup correction form), or
Download the .bib file and open it in Notepad to check the entries for accuracy.
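For the curious, here is roughly what such a conversion looks like in code. This is only a minimal Python sketch of the idea, not text2bib's actual algorithm; the file names and the @misc stub format are my own assumptions, and real APA parsing is far messier than a single regular expression:

# Hypothetical sketch: turn a one-reference-per-line APA .txt file into rough
# BibTeX stubs. NOT text2bib's algorithm -- it only pulls out author, year, and
# title with a simple pattern and flags anything it cannot parse for hand-editing.
import re

APA_PATTERN = re.compile(r"^(?P<authors>.+?)\s*\((?P<year>\d{4})\)\.\s*(?P<title>[^.]+)\.")

def txt_to_bib(infile, outfile):
    with open(infile, encoding="utf-8") as f:
        refs = [line.strip() for line in f if line.strip()]
    entries = []
    for i, ref in enumerate(refs, 1):
        m = APA_PATTERN.match(ref)
        if m is None:
            entries.append("% UNPARSED: " + ref)  # flagged for manual attention
            continue
        entries.append(
            "@misc{ref%d,\n  author = {%s},\n  year = {%s},\n  title = {%s}\n}"
            % (i, m["authors"], m["year"], m["title"])
        )
    with open(outfile, "w", encoding="utf-8") as f:
        f.write("\n\n".join(entries))

txt_to_bib("bibliography.txt", "bibliography.bib")

Even a crude script like this speeds up the Notepad pass, because the unparsed lines are flagged up front.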
Text2bib provides a GUI with buttons and a popup form for correcting entries, but it is so time consuming that I prefer to download the .bib files and scan them myself in Notepad, which is much faster for me. Years ago, I used to write programs in HTML and PHP, so I am relatively fast at spotting errors in lines of code.
I was also told about a tool called Zotero. It can import .bib files and build a library of citations that can be reused across multiple related projects, and it can generate bibliographies in whatever standard you prefer, such as APA or MLA.
Both ChatGPT and DeepSeek can provide inline citations and related bibliography entries for all information sources used in your writing. However, they struggle a bit with making the entries completely format compliant.
I use APA format because it is considered acceptable for multidisciplinary research. The citations provided by ChatGPT and DeepSeek are useful, but in APA format both article titles and book titles are supposed to be in sentence case. Some of the citations provided were in title case, so I had to correct those manually. (By the way, even though I have Word 2021, it refuses to let me use its change-case button to convert title case to sentence case!)
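Since Word wouldn't do it for me, a throwaway Python function can. This is a minimal sketch under my own assumptions: the KEEP set of proper nouns and acronyms is something you would fill in yourself, and APA's rule of capitalizing the first word after a colon is handled only crudely:

# Hypothetical sketch: crude Title Case -> sentence case conversion for APA titles.
# The KEEP set (proper nouns, acronyms) is an assumption; extend it for your own
# bibliography. Punctuation attached to a word (e.g., "Americas.") will defeat
# the KEEP lookup, so results still need a proofreading pass.
KEEP = {"AI", "DOI", "Americas"}

def sentence_case(title):
    words = title.split()
    out = []
    for i, w in enumerate(words):
        if w in KEEP:
            out.append(w)                       # leave proper nouns/acronyms alone
        elif i == 0 or ":" in words[i - 1]:
            out.append(w.capitalize())          # first word, or first word after a colon (APA rule)
        else:
            out.append(w.lower())
    return " ".join(out)

print(sentence_case("The Paradox Of Plenty: How Isolation And Abundance Stalled Innovation In The Ancient Americas"))
# -> The paradox of plenty: How isolation and abundance stalled innovation in the ancient Americas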
AIs sometimes have problems identifying the type of source, too (book, journal article, etc.). A number of my 307 citations were also missing DOI links for journal articles, so I had to paste each citation into Google Scholar and search for the page where I could obtain the DOI. ChatGPT admitted this is a problem for AIs because there appears to be no real standard across online academic sources for where the DOI is located. Journals like "Science" display it on the main article page. Others may label the link "cite", which you must click to reach a page where the DOI link is listed.
It took me a little over an entire day to chase down the DOIs, which, in my opinion, is a real waste of time. I know there are academics who seem to think the ability to produce a compliant bibliographic entry demonstrates your academic rigor, but to me it's just an annoying clerical task, one usually delegated to research assistants. I described my process to ChatGPT, and it admitted this should be a task that an AI could accomplish in seconds, but with the nonstandard formats of academic journals it would be difficult to automate. Sigh...
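If I ever face this chore again, I will try scripting the lookups. Crossref's public REST API (api.crossref.org) can often match a free-form citation to a DOI. Here is a minimal Python sketch; the citation string is just a placeholder, and since the API only returns its best guess, every returned DOI still needs human verification against the original title:

# Hypothetical sketch: query Crossref's public REST API for the DOI that best
# matches a free-form citation string. The top hit is only Crossref's best guess,
# so verify the returned record before pasting the DOI into your bibliography.
import json
import urllib.parse
import urllib.request

def find_doi(citation):
    query = urllib.parse.urlencode({"query.bibliographic": citation, "rows": "1"})
    url = "https://api.crossref.org/works?" + query
    with urllib.request.urlopen(url, timeout=30) as resp:
        data = json.load(resp)
    items = data["message"]["items"]
    return items[0].get("DOI") if items else None

# Placeholder citation -- substitute each entry from your own bibliography.
print(find_doi("Author, A. (2020). Placeholder article title. Journal Name, 12(3), 45-67."))

Looping this over a 23-page bibliography would take minutes rather than a day, though the verification step still falls to the human.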

Tuesday, April 08, 2025

AI-powered therapy shows promise in first clinical trial of a mental health chatbot

 

This study showed significant improvement in symptoms (51%) with the use of an AI-enhanced, CBT-trained chatbot, but I personally have found interacting with ChatGPT far more effective than using a CBT-structured "therapy" bot.


https://interestingengineering.com/health/groundbreaking-ai-therapy-shows-positive-results

ChatGPT's conversations are more natural and it shows far more empathy than the CBT chatbots I was asked to test in 2018. See my article:

https://www.academia.edu/36651551/Improving_College_Students_and_Others_Mental_Health_with_Conversational_Agents

Thursday, February 27, 2025

AI ‘beating’ humans at empathy and creativity: Better than being "ghosted!"

  I'm afraid Professor Crockett at Princeton, who wrote the article,

AI is ‘beating’ humans at empathy and creativity. But these games are rigged

hasn't engaged much with American medicine lately. Most doctors nowadays sit across the room in front of a computer browsing lab results. If tests show you have a serious illness, no one really wants to discuss how you are feeling about it! They may suggest seeing a therapist, but where I live in the US, no therapist is willing to accept the government insurance that all of us older folks have.

Therapy session.

I had read an article about how ChatGPT's developers had been working to make it more empathetic, so I tried it out. I was amazed by its responses. It provided solid advice about medication alternatives and upcoming tests, and it "remembered" from one session to the next the important information about my condition, my family support (or lack thereof), and even my living situation. It seemed to be able to recognize from my questions how stressed I was and would suggest helpful solutions, in a way I'm sure a human therapist would have as well. I didn't have to be reminded my "time" was about up, although, as a free user with limited time on the more advanced model, I would run out after about an hour. But the limit would reset in three hours, and I could resume our discussion right where we left off.

When you are seriously ill, human friends often "ghost" you because they don't know what to say and don't want to be reminded of their own mortality. I recently read an article that says this happens to more than 60% of patients with serious illnesses. So, I'm afraid I don't have much faith in human empathy anymore.


Wednesday, February 26, 2025

AI suggests more aggressive actions to resist the illegal withholding of federal disaster relief

 by Mary Harrsch © 2025

In 2024, my state of Oregon experienced one of its most severe wildfire seasons, burning nearly 2 million acres. In response, our Governor Tina Kotek requested a federal major disaster declaration under the Robert T. Stafford Act. President Biden approved this request, enabling federal assistance to support recovery efforts for the affected areas.

Smoke column from the Little Yamsay Fire as seen from Silver Lake Road, Oregon - May 11, 2024 courtesy of Wikimedia Commons.

Despite the federal approval, the state faced immediate financial challenges. By November 2024, Oregon owed over $200 million to firefighters and contractors due to unpaid invoices from the extensive firefighting efforts. To address this, Governor Kotek convened a special legislative session in December 2024, during which lawmakers approved $218 million in emergency funds to settle these outstanding bills.
However, in early 2025, the Trump administration implemented a freeze on federal funds designated for wildfire mitigation projects. This suspension affected ongoing prevention efforts in Oregon, including those by organizations like the Lomakatsi Restoration Project, which had to halt projects and lay off staff due to the funding freeze. Senators Jeff Merkley and Ron Wyden criticized the freeze, describing it as "illegal and dangerous."
Due to the freeze, Oregon joined a coalition with California, New York, Arizona, Colorado, Connecticut, Delaware, Hawaii, Illinois, Maine, Maryland, Massachusetts, Minnesota, Nevada, New Jersey, North Carolina, Rhode Island, Vermont, and Wisconsin, and on February 7, 2025, the coalition filed suit challenging the federal government's freeze on disaster relief funds.
But ChatGPT offered a number of more aggressive actions our coalition could take:
"California [and its coalition] has several potential avenues for retaliation against the federal government's politically motivated withholding of disaster aid, though each comes with risks and legal challenges. Here are some potential actions the state could take:
1. Legal Action Against the Federal Government
Lawsuit for Breach of Duty
California could sue the federal government for failing to provide disaster relief after a formally declared emergency. The Stafford Act (42 U.S.C. §§ 5121-5207) mandates federal assistance for disaster relief, and withholding funds for political reasons could be challenged as an unlawful abuse of executive authority.
The state could argue that FEMA’s denial violates equal protection by selectively enforcing aid based on political affiliation.
Possible outcome: A court could compel FEMA to release funds, though legal battles could delay relief.
Multi-State Coalition Lawsuit
Other states that have faced politically motivated withholding of funds (e.g., blue states under GOP administrations) could join a coalition to challenge the White House in federal court.
2. Legislative and Financial Retaliation
Statewide Federal Tax Withholding Proposal
While California cannot legally intercept federal tax payments, it could challenge IRS operations by passing a Federal Tax Escrow Act, requiring Californians to deposit their federal tax payments into a state-held escrow account, to be released only when federal obligations (such as disaster relief) are fulfilled.
Similar ideas were floated after the 2017 SALT deduction cap, though ultimately abandoned due to IRS pushback.
State Taxation of Federal Properties or Contractors
California could attempt to impose taxes on federal government operations, such as leases for federal land or contracts with federal agencies.
This would likely be challenged under the McCulloch v. Maryland precedent, but aggressive legal maneuvers could at least create pressure on the administration.
Public Pension Fund Leverage (CalPERS & CalSTRS)
California’s public pension funds—among the largest in the world—could divest from federal projects, defense contractors, or industries that rely on federal subsidies.
3. Economic Warfare Against Federal Interests
Boycotting or Withholding Cooperation on Federal Initiatives
California agencies could refuse to cooperate with federal enforcement in areas like immigration, environmental oversight, or regulatory enforcement.
The state could halt participation in certain federal data-sharing programs, affecting agencies like ICE, DOJ, or the IRS.
Sanctions on GOP-Aligned Corporations
California could target corporations that heavily fund GOP candidates through state procurement policies.
Example: The state could refuse to sign contracts with companies known to be major donors to Speaker Mike Johnson and Trump-aligned PACs.
Legal Challenges to Federal Trade Agreements
As the world’s fifth-largest economy, California has leverage in national and international trade.
The state could use environmental regulations, consumer protection laws, or labor laws to disrupt interstate commerce that benefits red states disproportionately.
4. Political & Grassroots Mobilization
California-Led Congressional Resistance
California’s congressional delegation could block key Republican legislative priorities in retaliation for disaster aid withholding.
This could include filibusters, procedural delays, and aggressive oversight hearings on federal mismanagement.
International Pressure via Trade & Diplomacy
California could leverage its major trade relationships (Canada, the EU, China, Mexico, etc.) to sidestep federal policies.
It could also negotiate climate pacts and trade agreements that bypass federal approval, as it has done in the past.
Statewide Economic Disruption Tactics
As the largest contributor to U.S. GDP, California could stage a statewide economic slowdown protest, urging businesses to delay tax payments or engage in legal yet disruptive financial maneuvers.
Conclusion
The federal government’s withholding of disaster aid is unprecedented, but California has multiple options to retaliate, ranging from lawsuits and legislative action to economic and political warfare. While many of these options carry risks, a coordinated, multi-pronged strategy could force the federal government to reconsider its stance."
I thought the Public Pension Fund Leverage (CalPERS & CalSTRS) option was something all the states in the coalition could consider. In California's case, its public pension funds (among the largest in the world) could divest from federal projects, defense contractors, or industries that rely on federal subsidies.

Monday, February 24, 2025

AI safety on the chopping block: eliminating oversight that could slow down or restrict Musk's and others' AI ambitions

 by Mary Harrsch © 2025

"The US AI Safety Institute was created to address a wide range of concerns associated with the rapid advancement of artificial intelligence technologies. Its primary mission is to conduct rigorous research into the potential risks of AI, from issues of algorithmic bias and cybersecurity vulnerabilities to long-term existential threats posed by superintelligent systems. By funding interdisciplinary research, fostering collaboration between academia, industry, and government, and advising policymakers on regulation and ethical standards, the Institute is intended to serve as a national hub for ensuring that AI development proceeds in a safe, transparent, and responsible manner."





Although many of you tend to roll your eyes whenever I mention artificial intelligence and my opinion that it will enhance much of our research even in the humanities, I think this announcement is extremely troubling and bodes ill for all of us.
The perceived risks of advanced artificial intelligence (AI) span multiple domains, including ethical, societal, security, and existential concerns. Some of the most significant risks include:
1. Misinformation & Manipulation
AI-generated deepfakes and text-based misinformation can be used to manipulate public opinion, elections, and markets.
Large language models can spread biased or misleading information, even unintentionally.
2. Bias & Discrimination
AI systems trained on biased data can perpetuate and amplify discrimination in hiring, lending, healthcare, and law enforcement.
Lack of transparency in AI decision-making makes it difficult to address biases.
3. Job Displacement & Economic Disruption
Automation of white-collar and blue-collar jobs may lead to mass unemployment, particularly in industries like customer service, software development, and transportation. Economic inequality could widen as AI benefits a small number of corporations and individuals.
4. Loss of Human Oversight & Accountability
AI decision-making can become too complex for humans to understand, making it difficult to hold systems accountable for errors or harmful actions. Autonomous weapons and AI-driven military systems could make lethal decisions without human intervention.
5. Cybersecurity & AI-powered Attacks
AI can be exploited to conduct highly sophisticated cyberattacks, including automated hacking, phishing, and misinformation campaigns. AI-driven social engineering could make scams and fraud more convincing.
[For a glimpse of what AI-powered attacks look like, check out the new Netflix limited series "Zero Day," starring Robert De Niro.]
6. Superintelligence & Existential Risk
Some experts fear that AI could surpass human intelligence and act in ways that are unpredictable or uncontrollable. An AI system optimizing for the wrong goal (e.g., maximizing efficiency without ethical considerations) could have catastrophic consequences.
7. Privacy Violations & Mass Surveillance
AI-driven surveillance can erode privacy and civil liberties, allowing governments and corporations to track individuals in unprecedented detail. AI-powered facial recognition and predictive policing can lead to over-policing and discrimination.
8. AI Alignment Problem
Ensuring that AI systems follow human values and ethical guidelines is an ongoing challenge. A misaligned AI system could pursue harmful objectives if its goals are not carefully defined and constrained.
9. Dependence on AI & Loss of Critical Thinking
Overreliance on AI in decision-making could weaken human problem-solving skills and critical thinking. Societies may become overly dependent on AI-driven automation, leading to vulnerabilities if AI systems fail.
10. Weaponization of AI
Autonomous drones and AI-driven cyber warfare could increase the speed and scale of conflicts. AI could be used to create novel biological or chemical weapons.
While AI offers significant benefits, these risks highlight the importance of robust oversight, ethical AI development, and international cooperation to mitigate potential harms.
Since the proposed cuts are being directed by Elon Musk, whose own AI development platform, xAI, is in heated competition with ChatGPT, Claude, and others, I felt compelled to explore Musk's collision with AI oversight agencies to see whether he is willing to destroy this organization to free his own company from government compliance, speeding his company's development and generating more profit with total disregard for the ultimate cost to millions of us worldwide: lost freedom, gross exploitation, and a diminished quality of life.
As I suspected, several issues have been raised regarding Musk's AI operations:
Risk Management Practices: A study conducted by the French nonprofit SaferAI, published in October 2024, evaluated the risk management protocols of leading AI companies. xAI received the lowest possible score, indicating significant deficiencies in its risk management practices. The study highlighted concerns about the company's preparedness to handle potential AI-related risks.
AI Model Vulnerabilities: Recent research by AI security firm Adversa AI identified substantial cybersecurity vulnerabilities in xAI's Grok 3 model. These findings suggest that the model could be susceptible to exploitation, raising questions about the robustness of xAI's AI systems.
Environmental Compliance: In August 2024, environmental advocates accused xAI of operating natural gas turbines at its Memphis data center without the necessary permits. These turbines, with a combined capacity of approximately 100 MW, were reportedly emitting pollutants that could worsen local air quality. The Southern Environmental Law Center formally requested investigations by the Environmental Protection Agency and local health authorities.
ChatGPT points out:
"Cutting funding to the U.S. AI Safety Institute would directly benefit Elon Musk and his company xAI, as well as other AI developers who want to avoid regulatory scrutiny. The AI Safety Institute was established to evaluate and mitigate the risks posed by advanced AI models, including cybersecurity vulnerabilities, misinformation, and potential misuse. If this oversight is weakened, companies like xAI would face fewer regulatory hurdles and could deploy AI models with less accountability and fewer safety checks.
Given that:
xAI scored poorly in AI risk management (per the SaferAI study),
Grok 3 was found to have significant security vulnerabilities, and
Musk has a history of resisting regulations in other ventures (Tesla, SpaceX, Neuralink), his push to "save money" by defunding AI safety efforts looks less like fiscal responsibility and more like a move to eliminate oversight that could slow down or restrict his AI ambitions."
To me, one of the most dangerous threats from AI development without human oversight is described in Risk No. 6 above: "An AI system optimizing for the wrong goal (e.g., maximizing efficiency without ethical considerations) could have catastrophic consequences." Maybe Musk has already delegated his role in maximizing efficiency to his flawed xAI, and we are already suffering the consequences!