Social media has become one of the most powerful tools for recruiting participants in clinical research. At Wayturn, we’ve supported over 150 research trials across North America and advertised nearly all of them on platforms like Facebook, Instagram, and YouTube to recruit from specific, often hard-to-reach, populations.
This trend is growing rapidly for good reason: online ads can reach participants where they already are, help overcome geographic or mobility barriers, and accelerate recruitment timelines.
The Problem: Fraudulent Submissions
But as clinical trial recruitment moves online, it has also opened the door to fraudulent actors—bots, scammers, and fake respondents—who now routinely target research studies for quick financial gain. What used to be isolated incidents have become widespread disruptions. In fact, some investigators have found that more than 9 out of 10 responses to their online surveys were invalid.
These bad actors use a range of tactics:
- Bots automatically fill out forms using scripts, often bypassing simple security checks.
- Scammers impersonate real people or fabricate life experiences to qualify for compensation.
- Survey farms—organized groups trained to pass eligibility checks—scan social media for public study links and flood them with submissions.
While compensation is often the main incentive, fraud isn’t just about money. Some studies appear to be deliberately targeted based on their topic or participant population. Research focusing on LGBTQ+ communities, racial health disparities, or sensitive issues like sexual trauma has experienced disproportionate fraud attempts. These responders may be politically motivated, testing AI tools, or simply exploiting vulnerable studies for content.
The damage adds up quickly. Budgets are drained by fake participants, timelines are delayed, and most critically, data quality is compromised. When fraudulent responses distort research outcomes, it risks more than just bad science—it can lead to harmful clinical recommendations, misinformed health policy, and lost trust among the very communities the research aims to support.
Recognizing fraudulent responses is no easy task either: capable AI and competent survey farms mask the common signs. It’s a constant arms race between detection methods and circumvention.
The Red Flags: How to Recognize Fraudulent Participants
In their 2025 article in Ethics & Behavior, Ménard et al. documented the experiences of researchers working on three clinical and social science studies who encountered widespread fraud during online recruitment. Through interviews, the researchers described clear red flags that appeared before, during, or after survey participation, signaling bot activity, identity fraud, or coordinated survey farm responses.
While these red flags are accurate as of publication, it’s important to remember that fraud tactics evolve quickly. In no particular order, here are some of the signals reported in their study:
- Researchers received dozens to hundreds of eligibility inquiries immediately after launching ads—far beyond expected interest levels.
- Emails came from suspicious addresses, often with two first names and multiple digits (e.g., “SarahChloe8472@gmail.com”).
- Many messages used identical or near-identical wording, such as “I’m interested in participating in your study.”
- Compensation was discussed in unrelated online forums, suggesting the study had been reshared to fraud-prone audiences.
These pre-survey signals are easy to miss when recruitment is moving quickly, which is why it’s crucial to have systems in place to catch them early, before they contaminate your data. More red flags appeared during and after participation itself:
- Surveys were completed far faster than expected, often faster than a genuine response could plausibly be submitted.
- Many submissions arrived at once, often in tight clusters or sudden ‘flurries’.
- CAPTCHA and reCAPTCHA logs confirmed repeated bot detection, even when the bots were not blocked.
- Open-text responses were repeated across surveys or sounded vague and AI-generated (“I was one of the lucky ones”).
- Implausible demographic combinations were submitted repeatedly, like multiple “Black, two-spirit, 18-year-old drag performers” from the same small region.
- Some answers were nonsensical or bizarre, such as: “High sexual frequency, can cause reproductive system overload”.
- Researchers received multiple aggressive follow-up emails asking about payment, often from the same individuals.
- Duplicate IP addresses and identical geolocations were identified across submissions, including many from the same public locations.
- Submissions were flagged as high-risk by digital fingerprinting tools like RelevantID.
- Survey farms were discovered submitting multiple identities using the same networks or devices.
- Manual reviews revealed contradictions, impossible life histories, and repeated content across entries.
- Several teams had to discard entire datasets due to overwhelming fraud.
The Solution: Smarter Recruitment, Not Less Recruitment
Social media isn’t the problem—uncontrolled scale is. Its incredible reach makes it powerful for finding participants, but also attractive to fraudsters. Instead of avoiding online ads, researchers need to manage their campaigns with care and design surveys that can resist manipulation.
Table 1 – Overview of solutions
Adapted from the article by Ménard et al. (2025) and expanded upon by Wayturn; Wayturn’s additions are shown in bold.
| Strategy | Pros | Cons |
|---|---|---|
| Limit access to the survey | Very effective in preventing bots and fraudulent responders | Time intensive, alienates real participants, survey links can still spread |
| Check e-mail characteristics (e.g., mismatched name/email) | Effective and fast | Should be used with other strategies; may introduce bias |
| Check for limited/inaccurate content in email | Easy to identify | Same as above; less effective as AI improves |
| Require demonstrated proof of eligibility (e.g., public records) | Offers definitive proof | Not feasible for many studies |
| **Highly targeted ads to reduce general access** | Effective | Requires ad agency. Might be slow or expensive depending on audience size. |
| **Focus copywriting on benefits other than compensation** | Effective, removes motivation to use bots. | Requires copywriter, or time. Compensation might be important to real participants too. |
| **Add phone verification in prescreening stage** | Clear barrier to bots and survey farms. | Deters genuine participants too. Can be difficult to implement. Cost per number. |
| Enabling software platform safeguards (CAPTCHA, IP geolocation checks, time to completion, etc.) | Easy to implement, minimal technical skill needed | Not fully effective, best used in combination |
| Including trick questions | Easy to implement | Ineffective at identifying bots/fraud |
| Including hidden “honeypot” questions | Technically simple | Requires more skill; mostly ineffective |
| Scrutinizing response characteristics | Technically simple | Time intensive; may introduce bias |
| Evaluating open-ended questions | Useful in combination with other strategies | Labor intensive; may be biased; less effective with AI-generated responses |
| “Holistic” approaches (combining multiple strategies) | Most effective overall | Time intensive; potential for bias |
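To make the “check e-mail characteristics” row concrete, below is a minimal heuristic sketch in Python. The patterns and thresholds are illustrative assumptions rather than validated rules, and any flag raised this way should be weighed alongside other signals, not used on its own to reject a participant.

```python
import re

# Illustrative heuristics only: the regexes and the example below are assumptions
# for demonstration, not validated screening rules.
SUSPICIOUS_DIGITS = re.compile(r"\d{3,}")                    # long digit runs, e.g. "8472"
TWO_FIRST_NAMES = re.compile(r"^[A-Z][a-z]+[A-Z][a-z]+")     # e.g. "SarahChloe"

def email_red_flags(name: str, email: str) -> list[str]:
    """Return a list of reasons this email looks suspicious (empty list = no flags)."""
    local_part = email.split("@")[0]
    flags = []

    if SUSPICIOUS_DIGITS.search(local_part):
        flags.append("long digit run in address")
    if TWO_FIRST_NAMES.match(local_part):
        flags.append("two concatenated first names")

    # Rough name/email mismatch check: none of the stated name's tokens appear in the address.
    tokens = [t.lower() for t in re.split(r"\s+", name) if t]
    if tokens and not any(t in local_part.lower() for t in tokens):
        flags.append("stated name does not appear in address")

    return flags

print(email_red_flags("John Doe", "SarahChloe8472@gmail.com"))
# ['long digit run in address', 'two concatenated first names',
#  'stated name does not appear in address']
```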
The solution isn’t to avoid social media—it’s to use it more carefully and with more stringent screening methods.
Automation, passive detection, and strategic ad design are now essential components of responsible recruitment.
1. Manage Ad Copy and Visibility
One of the strongest predictors of fraud is how compensation is advertised. Studies that highlight payment in the ad headline or preview text are far more likely to be reposted to fraud-prone forums.
To mitigate this:
- Avoid stating specific compensation amounts or formats in the ad itself.
- Use language focused on the study’s purpose or participant relevance.
- Limit ad visibility using geography, age, and interest-based targeting.
In theory, targeted social media ads may actually be safer than other mass recruitment methods, especially those that post study links publicly (e.g., on university websites or general research portals). Open listings can be easily scraped by bots and reposted to forums where survey farms operate.
By contrast, ad platforms allow you to narrow your audience to those more likely to actually meet study criteria—such as people who follow certain health communities, live in a specific region, or are in the right age group. This not only improves recruitment efficiency, but also reduces exposure to fraudulent actors who are less likely to be part of these subgroups.
2. Build In Automatic Fraud Detection
Manually checking every submission is too time-consuming, so once participants reach the survey, researchers should rely on automated tools and passive checks to detect suspicious patterns. These include:
- Time-to-complete benchmarks (e.g., flagging surveys done in <5 minutes).
- IP duplication checks and geolocation mismatches.
- Detection of repeated, or incoherent open-ended responses.
- Use of honeypot questions (hidden fields bots may fill in), although their effectiveness is decreasing as bots become more advanced.
- CAPTCHA and fingerprinting tools.
These methods operate silently and don’t deter genuine participants, but they surface patterns that help you screen responses efficiently.
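As a concrete illustration, here is a minimal sketch of how these passive checks might be scripted against exported survey data. The column names (duration_sec, ip, open_text, honeypot) and the five-minute threshold are assumptions about a hypothetical export format, not a description of any particular platform’s output or of Wayturn’s production tooling.

```python
import pandas as pd

# Placeholder threshold: flag surveys completed in under 5 minutes.
MIN_DURATION_SEC = 300

def flag_submissions(df: pd.DataFrame) -> pd.DataFrame:
    """Add passive fraud-signal columns and sort the most suspicious rows first."""
    out = df.copy()
    out["too_fast"] = out["duration_sec"] < MIN_DURATION_SEC
    out["duplicate_ip"] = out.duplicated(subset="ip", keep=False)
    # Identical open-ended answers appearing across different respondents
    out["duplicate_text"] = out.duplicated(subset="open_text", keep=False)
    # Honeypot: a hidden field genuine participants never see, so it should stay empty
    out["honeypot_filled"] = out["honeypot"].fillna("").str.strip() != ""

    flag_cols = ["too_fast", "duplicate_ip", "duplicate_text", "honeypot_filled"]
    out["flag_count"] = out[flag_cols].sum(axis=1)
    return out.sort_values("flag_count", ascending=False)

# Usage (assuming a CSV export):
# flagged = flag_submissions(pd.read_csv("submissions.csv"))
# print(flagged[flagged["flag_count"] >= 2])
```

In practice, the highest-scoring rows would feed into the manual screening step described later, rather than being discarded automatically.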
3. Add Phone Verification
If the risk of fraud is high, the next level is to require a verified phone number before the prescreening survey. This does slightly deter genuine participants, so it’s a balancing act, best reserved for studies that already have a problem with fraudulent responses or that offer high compensation. The steps (with a code sketch after this list) are:
- Collect numbers through the screener or survey platform.
- Validate the number using a service that rejects VoIP or reused numbers (similar to Google’s verification process); Twilio is one example of such a service, and Wayturn’s surveys can also use it.
- Ensure the number hasn’t been used in the same study previously.
- Send a verification code to the number, and require the participant to enter it before they can continue.
This step won’t eliminate fraud entirely, but it raises the effort required beyond what many bots and survey farms are set up to handle. Online services do exist for buying phone numbers and receiving text messages in other countries, but many international companies (such as Google and OpenAI) still rely on this method to ensure users are more likely to be human.
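For readers who want to see the shape of this flow, here is a provider-agnostic sketch in Python. The lookup_line_type and send_sms functions are hypothetical placeholders for whichever lookup/SMS service is used (Twilio, mentioned above, offers both capabilities), and the in-memory storage is purely for illustration.

```python
import hashlib
import secrets

# Hypothetical placeholders for a real lookup/SMS provider.
def lookup_line_type(phone: str) -> str:
    """Stand-in for a carrier lookup returning 'mobile', 'landline', or 'voip'."""
    return "mobile"

def send_sms(phone: str, message: str) -> None:
    """Stand-in for an SMS-sending call to your provider."""
    print(f"[SMS to {phone}] {message}")

used_number_hashes: set[str] = set()   # numbers already enrolled in this study (hashed)
pending_codes: dict[str, str] = {}     # phone -> code awaiting confirmation

def start_verification(phone: str) -> str:
    # Reject VoIP numbers, which are cheap to obtain in bulk.
    if lookup_line_type(phone) == "voip":
        return "rejected: VoIP numbers are not accepted"

    # Reject numbers already used in this study (stored hashed, not in plain text).
    digest = hashlib.sha256(phone.encode()).hexdigest()
    if digest in used_number_hashes:
        return "rejected: number already used for this study"

    # Send a one-time code the participant must enter before continuing.
    code = f"{secrets.randbelow(1_000_000):06d}"
    pending_codes[phone] = code
    send_sms(phone, f"Your study verification code is {code}")
    return "code sent"

def confirm_verification(phone: str, entered_code: str) -> bool:
    if pending_codes.get(phone) == entered_code:
        used_number_hashes.add(hashlib.sha256(phone.encode()).hexdigest())
        del pending_codes[phone]
        return True
    return False
```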
4. Final Layer: Manual Screening
Even after all these safeguards, some fraudsters will pass automated checks. That’s why the final step involves human judgment:
- Review for coherent long-form responses.
- Confirm logical and consistent data across the survey.
- Reach out to participants by phone to confirm eligibility and explain the study.
While manual verification is time-consuming, it remains manageable as a final step, once the clearest cases of fraud have already been screened out automatically.
In conclusion
Social media recruitment isn’t broken—it just needs structure. With the right strategy, tools, and layered safeguards, researchers can still reach the right participants and protect the quality of their data.
Wayturn routinely implements these strategies to maximize the number of genuine enrollments we send to our clients while minimizing the time spent fighting bots.