<?xml version="1.0" encoding="UTF-8"?>
<resource xmlns="http://datacite.org/schema/kernel-4" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.5/metadata.xsd">
  <identifier identifierType="DOI">10.7910/DVN/F7SZMG</identifier>
  <creators>
    <creator>
      <creatorName nameType="Personal">Baker, Maher Asaad</creatorName>
      <givenName>Maher Asaad</givenName>
      <familyName>Baker</familyName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="https://orcid.org">https://orcid.org/0000-0001-8013-6044</nameIdentifier>
      <affiliation>SOLAV</affiliation>
    </creator>
  </creators>
  <titles>
    <title>The Spread of AI-Generated Misinformation</title>
  </titles>
  <publisher>Harvard Dataverse</publisher>
  <publicationYear>2024</publicationYear>
  <subjects>
    <subject>Computer and Information Science</subject>
  </subjects>
  <contributors>
    <contributor contributorType="ContactPerson">
      <contributorName nameType="Personal">Baker, Maher Asaad</contributorName>
      <givenName>Maher Asaad</givenName>
      <familyName>Baker</familyName>
      <affiliation>SOLAV</affiliation>
    </contributor>
  </contributors>
  <dates>
    <date dateType="Submitted">2024-03-29</date>
    <date dateType="Available">2024-03-29</date>
  </dates>
  <resourceType resourceTypeGeneral="Dataset"/>
  <relatedIdentifiers>
    <relatedIdentifier relationType="IsSupplementTo" relatedIdentifierType="DOI">10.5281/ZENODO.10390423</relatedIdentifier>
  </relatedIdentifiers>
  <sizes>
    <size>737287 bytes</size>
  </sizes>
  <formats>
    <format>application/pdf</format>
  </formats>
  <version>1.0</version>
  <rightsList>
    <rights rightsURI="info:eu-repo/semantics/openAccess"/>
    <rights rightsURI="http://creativecommons.org/publicdomain/zero/1.0" rightsIdentifier="CC0-1.0" rightsIdentifierScheme="SPDX" schemeURI="https://spdx.org/licenses/" xml:lang="en">Creative Commons CC0 1.0 Universal Public Domain Dedication.</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">In today&apos;s digital world, information flows freely and endlessly across borders and platforms. While this connectivity has empowered the spread of ideas, it has also enabled the propagation of misleading claims that can erode truth and trust. A new threat has emerged from the rising tide of artificial intelligence, as generative models allow the industrial-scale production of sophisticated synthetic content. Like a flood, AI-generated misinformation and disinformation threaten to drown the foundations of an informed society under a deluge of falsity. We must acknowledge that this flood was not unforeseen, but the inevitable result of careless development and deployment without adequate responsibility or wisdom. The creators of these powerful tools failed to consider their darker applications and the need to establish guardrails against abuse. Now the waters rage largely unchecked, with platforms struggling to stem the current and individuals left floundering in murky confusion. While regulation and moderation have roles to play, the solution lies deeper: in taking ownership of our technological progress and prioritizing integrity over expediency. The tide will not recede through reaction alone; it requires a turn toward proactive responsibility. Developers must recognize their ethical duty to anticipate misuse and implement safeguards that uphold reliability without compromising functionality. Platforms must embrace transparency and educate users to bolster critical thinking against manipulation. Individuals, too, have a part to play, through media literacy and a willingness to reconsider preconceptions in the face of contradictory evidence. We must shore up our foundations with truth and wisdom before the coming storm. This thesis aims to survey the rising floodwaters and assess our defenses. It will define the concepts of misinformation and disinformation, differentiating falsehood by intent. It will examine how generative AI enables the mass production of synthetic content, from deepfakes to persuasive text, and the realistic yet misleading material this facilitates. Detection methods will be explored, considering techniques from fact-checking to provenance analysis. Platform policies and individual responsibilities will also be evaluated. Through rigorous analysis, this work seeks to bring clarity and responsibility to the issue. It aims neither to deny technological progress nor to dismiss risk, but to navigate a balanced path between the two. Ultimately it calls us to rise above reaction and embrace foresight, prioritizing integrity over convenience in both the creation and consumption of information. The tide of falsity grows swiftly, but with wisdom and courage, we can establish bulwarks of truth to withstand the coming flood. By understanding the challenges and cooperating constructively, our society can emerge from this trial with foundations reinforced rather than eroded. The time to prepare is now, before the deluge.</description>
  </descriptions>
</resource>
