<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/">
	<channel>
		<title><![CDATA[Dark C0d3rs - Adversarial ML & Research]]></title>
		<link>https://darkcoders.wiki/</link>
		<description><![CDATA[Dark C0d3rs - https://darkcoders.wiki]]></description>
		<pubDate>Sat, 09 May 2026 11:54:20 +0000</pubDate>
		<generator>MyBB</generator>
		<item>
			<title><![CDATA[[INFO] Introduction to Adversarial Machine Learning & Research Resources]]></title>
			<link>https://darkcoders.wiki/Thread-INFO-Introduction-to-Adversarial-Machine-Learning-Research-Resources</link>
			<pubDate>Mon, 21 Jul 2025 14:17:17 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://darkcoders.wiki/member.php?action=profile&uid=2">hashXploiter</a>]]></dc:creator>
			<guid isPermaLink="false">https://darkcoders.wiki/Thread-INFO-Introduction-to-Adversarial-Machine-Learning-Research-Resources</guid>
			<description><![CDATA[Welcome to the frontier where offensive security meets artificial intelligence.<br />
This thread is a living index of the core concepts, tools, research papers, and attack vectors in Adversarial Machine Learning (AML) — the art of abusing, bypassing, or hardening AI systems.<br />
<br />
<span style="font-size: x-large;" class="mycode_size"><span style="font-weight: bold;" class="mycode_b">What is Adversarial Machine Learning?</span></span><br />
<br />
AML focuses on exploiting weaknesses in machine learning models to:<ul class="mycode_list"><li><span style="font-weight: bold;" class="mycode_b">Fool classifiers</span> (e.g., malware labeled as benign)<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Poison training data</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Steal models or data</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Craft inputs that trigger unexpected behavior</span><br />
</li>
</ul>
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>If traditional apps have logic bugs, AI models have <span style="font-style: italic;" class="mycode_i">decision boundary bugs</span>.</blockquote>
<br />
<span style="font-size: x-large;" class="mycode_size"><span style="font-weight: bold;" class="mycode_b">Offensive Techniques</span></span><br />
<ol type="1" class="mycode_list"><li><span style="font-weight: bold;" class="mycode_b">Evasion Attacks</span> – Modify input to cause misclassification (e.g., making malware look benign).<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Model Poisoning</span> – Inject malicious data during training to corrupt future predictions.<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Model Extraction</span> – Reverse engineer black-box models using API access.<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Membership Inference</span> – Identify whether a data point was in the training set.<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Prompt Injection (LLMs)</span> – Manipulate instructions and outputs in AI chatbots.<br />
</li>
</ol>
<br />
<span style="font-size: x-large;" class="mycode_size"><span style="font-weight: bold;" class="mycode_b">Tools &amp; Frameworks</span></span><br />
You are not allowed to view links. <a href="https://darkcoders.wiki/member.php?action=register">Register</a> or <a href="https://darkcoders.wiki/member.php?action=login">Login</a> to view.  :   NLP adversarial testing<br />
You are not allowed to view links. <a href="https://darkcoders.wiki/member.php?action=register">Register</a> or <a href="https://darkcoders.wiki/member.php?action=login">Login</a> to view. :   Evasion &amp; defense methods<br />
You are not allowed to view links. <a href="https://darkcoders.wiki/member.php?action=register">Register</a> or <a href="https://darkcoders.wiki/member.php?action=login">Login</a> to view. :  Comprehensive AML testing<br />
You are not allowed to view links. <a href="https://darkcoders.wiki/member.php?action=register">Register</a> or <a href="https://darkcoders.wiki/member.php?action=login">Login</a> to view. :  White-box and black-box attacks<br />
You are not allowed to view links. <a href="https://darkcoders.wiki/member.php?action=register">Register</a> or <a href="https://darkcoders.wiki/member.php?action=login">Login</a> to view. :  CV attacks on PyTorch/TensorFlow<br />
<br />
<span style="font-weight: bold;" class="mycode_b"><span style="font-size: x-large;" class="mycode_size">Must-Read Papers</span></span><ul class="mycode_list"><li><span style="font-weight: bold;" class="mycode_b">Explaining and Harnessing Adversarial Examples</span> – You are not allowed to view links. <a href="https://darkcoders.wiki/member.php?action=register">Register</a> or <a href="https://darkcoders.wiki/member.php?action=login">Login</a> to view.<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Backdooring Neural Networks</span> – You are not allowed to view links. <a href="https://darkcoders.wiki/member.php?action=register">Register</a> or <a href="https://darkcoders.wiki/member.php?action=login">Login</a> to view.<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Adversarial Examples Are Not Bugs, They Are Features</span> – You are not allowed to view links. <a href="https://darkcoders.wiki/member.php?action=register">Register</a> or <a href="https://darkcoders.wiki/member.php?action=login">Login</a> to view.<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Universal Adversarial Perturbations</span> – You are not allowed to view links. <a href="https://darkcoders.wiki/member.php?action=register">Register</a> or <a href="https://darkcoders.wiki/member.php?action=login">Login</a> to view.<br />
</li>
</ul>
<br />
<span style="font-weight: bold;" class="mycode_b"><span style="font-size: x-large;" class="mycode_size">LLM-Specific Attacks (GPT, Claude, etc.)</span></span><ul class="mycode_list"><li>Prompt Injection &amp; Jailbreaks<br />
</li>
<li>Training Data Leakage<br />
</li>
<li>Fine-Tuning Exploits<br />
</li>
<li>Prompt Leaking via Reverse Prompting<br />
</li>
</ul>
<br />
<br />
Let’s build a solid knowledge base for <span style="font-weight: bold;" class="mycode_b">adversarial AI security</span>.<br />
If you're reading a cool paper, building a model-breaking tool, or fuzzing GPT — <span style="font-weight: bold;" class="mycode_b">post it here</span>.<br />
<br />
? <span style="font-style: italic;" class="mycode_i">“Attackers think in graphs. ML models think in probabilities. We think in both.”</span>]]></description>
			<content:encoded><![CDATA[Welcome to the frontier where offensive security meets artificial intelligence.<br />
This thread is a living index of the core concepts, tools, research papers, and attack vectors in Adversarial Machine Learning (AML) — the art of abusing, bypassing, or hardening AI systems.<br />
<br />
<span style="font-size: x-large;" class="mycode_size"><span style="font-weight: bold;" class="mycode_b">What is Adversarial Machine Learning?</span></span><br />
<br />
AML focuses on exploiting weaknesses in machine learning models to:<ul class="mycode_list"><li><span style="font-weight: bold;" class="mycode_b">Fool classifiers</span> (e.g., malware labeled as benign)<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Poison training data</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Steal models or data</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Craft inputs that trigger unexpected behavior</span><br />
</li>
</ul>
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>If traditional apps have logic bugs, AI models have <span style="font-style: italic;" class="mycode_i">decision boundary bugs</span>.</blockquote>
<br />
<span style="font-size: x-large;" class="mycode_size"><span style="font-weight: bold;" class="mycode_b">Offensive Techniques</span></span><br />
<ol type="1" class="mycode_list"><li><span style="font-weight: bold;" class="mycode_b">Evasion Attacks</span> – Modify input to cause misclassification (e.g., making malware look benign).<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Model Poisoning</span> – Inject malicious data during training to corrupt future predictions.<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Model Extraction</span> – Reverse engineer black-box models using API access.<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Membership Inference</span> – Identify whether a data point was in the training set.<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Prompt Injection (LLMs)</span> – Manipulate instructions and outputs in AI chatbots.<br />
</li>
</ol>
<br />
<span style="font-size: x-large;" class="mycode_size"><span style="font-weight: bold;" class="mycode_b">Tools &amp; Frameworks</span></span><br />
You are not allowed to view links. <a href="https://darkcoders.wiki/member.php?action=register">Register</a> or <a href="https://darkcoders.wiki/member.php?action=login">Login</a> to view.  :   NLP adversarial testing<br />
You are not allowed to view links. <a href="https://darkcoders.wiki/member.php?action=register">Register</a> or <a href="https://darkcoders.wiki/member.php?action=login">Login</a> to view. :   Evasion &amp; defense methods<br />
You are not allowed to view links. <a href="https://darkcoders.wiki/member.php?action=register">Register</a> or <a href="https://darkcoders.wiki/member.php?action=login">Login</a> to view. :  Comprehensive AML testing<br />
You are not allowed to view links. <a href="https://darkcoders.wiki/member.php?action=register">Register</a> or <a href="https://darkcoders.wiki/member.php?action=login">Login</a> to view. :  White-box and black-box attacks<br />
You are not allowed to view links. <a href="https://darkcoders.wiki/member.php?action=register">Register</a> or <a href="https://darkcoders.wiki/member.php?action=login">Login</a> to view. :  CV attacks on PyTorch/TensorFlow<br />
<br />
<span style="font-weight: bold;" class="mycode_b"><span style="font-size: x-large;" class="mycode_size">Must-Read Papers</span></span><ul class="mycode_list"><li><span style="font-weight: bold;" class="mycode_b">Explaining and Harnessing Adversarial Examples</span> – You are not allowed to view links. <a href="https://darkcoders.wiki/member.php?action=register">Register</a> or <a href="https://darkcoders.wiki/member.php?action=login">Login</a> to view.<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Backdooring Neural Networks</span> – You are not allowed to view links. <a href="https://darkcoders.wiki/member.php?action=register">Register</a> or <a href="https://darkcoders.wiki/member.php?action=login">Login</a> to view.<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Adversarial Examples Are Not Bugs, They Are Features</span> – You are not allowed to view links. <a href="https://darkcoders.wiki/member.php?action=register">Register</a> or <a href="https://darkcoders.wiki/member.php?action=login">Login</a> to view.<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Universal Adversarial Perturbations</span> – You are not allowed to view links. <a href="https://darkcoders.wiki/member.php?action=register">Register</a> or <a href="https://darkcoders.wiki/member.php?action=login">Login</a> to view.<br />
</li>
</ul>
<br />
<span style="font-weight: bold;" class="mycode_b"><span style="font-size: x-large;" class="mycode_size">LLM-Specific Attacks (GPT, Claude, etc.)</span></span><ul class="mycode_list"><li>Prompt Injection &amp; Jailbreaks<br />
</li>
<li>Training Data Leakage<br />
</li>
<li>Fine-Tuning Exploits<br />
</li>
<li>Prompt Leaking via Reverse Prompting<br />
</li>
</ul>
<br />
<br />
Let’s build a solid knowledge base for <span style="font-weight: bold;" class="mycode_b">adversarial AI security</span>.<br />
If you're reading a cool paper, building a model-breaking tool, or fuzzing GPT — <span style="font-weight: bold;" class="mycode_b">post it here</span>.<br />
<br />
? <span style="font-style: italic;" class="mycode_i">“Attackers think in graphs. ML models think in probabilities. We think in both.”</span>]]></content:encoded>
		</item>
	</channel>
</rss>