What Happens When Grant Proposals Gloss Over the Facts
January 11, 2026
Grant proposals are marketing documents that pitch a vision of the future. They are also historical records of sorts, taking stock of recent projects and accomplishments, and they are profiles of an organization’s capabilities and capacities: claims about what an organization has done, knows how to do, and has the resources to do again. Grant writers are not fiction writers. They are supposed to write clearly and persuasively, but they are not supposed to finesse the proposal’s language to the point that it becomes untrue. And yet most grant writers have experienced being asked to “massage” unfavorable facts into preferred ones, or have seen their fact-based prose reworked by others into something that has the essence of truth but is not strictly true, or is at least less transparent. To some, this may sound like business as usual, just what you need to do to win a grant. However, these little acts of truth-stretching, which can take the form of exaggerations, omissions, and misrepresentations, can exact a cost.
How It Happens: The Slippery Slope from Unknowns to Half-Truths
Grant proposals have always been on the edge of fact and fiction because they are about selling a transformation that could occur if money were received and a project implemented. What is proposed is usually (if not always) different from what actually gets implemented for various reasons, including incorrect assumptions during the planning phase or unpredictable events such as natural disasters or political crises during project rollout. Because proposals have so many unknowns and are expected to position the applicant in the most favorable light, grant writers become accustomed to treating information as building blocks to be shaped and arranged to tell the strongest story through positive framing. For example, in the paragraphs below, versions A and B tell the same basic story, but version A is the preferred telling because it frames things in a positive way and provides supporting details:
A: “Since X year, we’ve completed four projects funded by X donor (see Box 1). Using lessons learned from the two projects that were delivered on time and on budget, and from the remaining projects that required no-cost extensions, we reviewed and updated our project management processes and procedures (see Attachment X). These updates include introducing a customized CMS platform designed to manage budget development and oversight, enabling real-time tracking and problem-solving.”
versus:
B: “Two of our last four projects encountered challenges and required no-cost extensions. In response, we updated our project management processes and procedures to better track project results, flag problems, and solve issues.”
The concern with factual stretching arises not from positive framing but from vague writing that not only lacks supporting details but also obscures facts or inflates accomplishments. Under the philosophy that any suggestion of an organization’s weaknesses or missteps must be left unsaid, only stated (but undocumented) strengths remain. The result is language that looks similar to the examples above, but with some key differences in the details. For example, a more “massaged” statement might read:
C: “Our organization excels at project management and oversight. We deliver projects on time and on budget, providing exceptional value to donors, particularly given the size and complexity of the projects we implement. Supporting our effectiveness and systematic approach to project implementation are our recently updated, comprehensive project management protocols, which leverage evidence-based, cutting-edge tools, enabling us to monitor all project activities and resolve complex issues in real-time.”
Is this final statement true, and is it reasonable, or is it perhaps misleading?
Based on the original fact pattern presented in Example A, 50% of recent projects were delivered on time and on budget. These successful projects may have been implemented non-consecutively or years apart, undermining the notion of a consistent, established pattern of project management success. Additionally, none of the statements in this final example appears to be backed up by details. The project management protocols may indeed be extensive, but the statement says only that they were “recently updated,” leaving open the possibility that the updates are so new they remain untested. Example C also claims that the updated protocols allow “all” project activities to be monitored, a bold assertion that may not withstand scrutiny. A major red flag in version C is that it uses adjectives to persuade the reader that the organization is a stellar performer, rather than providing data to back up those assertions.
In comparison, Example A supports its statements by linking to the updated project management protocol and providing details on the four most recent projects, helping the reader better understand the organization’s experience and what informed the lessons learned. Example A is also transparent about challenges: while it first mentions the two projects completed on time and on budget, it doesn’t try to hide that two other projects did not meet those standards.
Perhaps the differences between A, B, and C seem subtle or irrelevant. If the donor wants more details, they’ll ask for them, the thinking may go. No harm done. What is worrisome is that if the donor doesn’t provide feedback or ask for more details, it can reinforce the applicant’s belief that what was written was fine and that donors don’t expect precision, so the same vague text may be reused in the next proposal. Left uncorrected, over time an organization’s boilerplate may fill with language so general, or a project performance history so sanitized, that it could apply to many organizations and is essentially meaningless. In particular, generalized statements of success and accomplishment that lack source details make fact-checking difficult, if not impossible, when a donor (or an internal reviewer) asks for references.
Tolerance for generalized or inflated statements can grow. If you become accustomed to writing proposals that gloss over the facts, you may become less diligent about checking where the data come from and whether they have been properly validated. And while the risks of writing inaccurate qualitative statements may appear low and perhaps acceptable, the ethical concerns increase considerably when it comes to quantitative data.
When the Numbers Lie
A scenario that is easier to recognize as problematic involves the manipulation of quantitative data. The most common situation we’ve seen is when, during the proposal phase, the grant writer or technical contributors cherry-pick data or fabricate baseline data.
Cherry-picking data is a common practice: for example, citing only the studies that support the applicant’s preferred framing of a problem’s scale or of the proposed approach. Cherry-picking isn’t a good practice, but it can go unchallenged, depending on how well the funder’s review panel knows the programmatic area and how thoroughly they check the proposal’s citations. The risk is that if the data used to design a project mischaracterize the problem, the project design could be flawed, leading to difficulties and possible failure during implementation. Organizations with a history of failed projects will have greater difficulty securing funding.
The second scenario, in which an applicant fabricates baseline data, poses an even greater risk. We’ve seen this take two forms. In the first, the applicant seeks to establish that funding for their proposed intervention is urgently needed, given the scale of the unmet need. But what is the unmet need? Either the applicant doesn’t know and is unsure how to find out, or they do know, but the numbers are unimpressive, so they make up better ones. These mistakes or overreaches can go unnoticed for two reasons: first, while proposals should include references for cited data, many funders do not require them; second, when funders do require data sources to be cited, they don’t always check them.
Variations of this problem of using weak or false data include (1) failing to conduct research for each new proposal and instead relying on research conducted several years ago, which is likely to be out of date; and (2) using current data but, either deliberately or through inattention, taking the data out of context when citing it in the proposal.
The latter can happen when someone is careless in interpreting data sets. For example, if a government study reports that 50% of children aged 10 to 13 in a specific geographic area read below grade level, it would be a misrepresentation to report that the study says 50% of children aged 10 to 13 in the entire public school system read below grade level.
Taking data out of context is sometimes an innocent mistake, but it can be a deliberate choice on the part of the applicant in response to pressure to write the most compelling proposal possible: If the data don’t support what you want to say, you can mask this by presenting data out of context or in an ambiguous fashion.
As an example, an applicant might cite an authoritative study but subtly misrepresent its findings, counting on reviewers to see the authoritative source and, if the data seem plausible, not check the references. These obfuscations take different forms, including citing data with what looks like a legitimate reference that actually points to a different or outdated resource. Or, making it even harder for reviewers to confirm the data, an applicant may cite a source in the broadest terms (“A recent U.N. report…”) and, instead of a complete reference with a link to the cited material, insert a partial reference like “Annual Report, 2021.”
As with the earlier example, maybe this seems like business as usual: just tweaking things to make the best case, no big deal. No single case of glossed-over or misrepresented facts is a big deal, but if the practice is adopted or accepted as routine, it becomes one.
The Cumulative Effects of Alternative Facts
At the proposal stage, fabricated or misrepresented data could help you make your case that your project is needed, but it complicates matters if you receive an award and attempt to execute and report on the project. The baseline data you guesstimated may make your project’s efforts look less impactful than they actually were if you overestimated the size and scope of the problem. That is, your interventions—even if they were effective and competently executed—may appear to have had little impact when the project is evaluated.
The other, more insidious problem with framing things to fit the preferred narrative in proposals is that it becomes harder to report truthfully on things that don’t go well or that conflict with what you wrote in the proposal. If you state in the proposal that all project staff are experts in their field with years of experience who can hit the ground running, it becomes problematic if, during the project, you have to report that you must pay for additional staff training because the staff are not as qualified as you claimed. If you present yourself as a stellar, unparalleled organization of exceptional capabilities in the proposal, you have nowhere to go but down once the grant begins and the realities of executing the work set in.
It’s always better to portray your organization, its capabilities, the scope of the problem, and the strengths and weaknesses of your proposed solution as honestly as possible at the proposal stage. If you do so, you will be on firmer ground to talk candidly with the donor about issues you’ve encountered during project implementation, and reporting on project progress will be easier because you began with accurate information. So, the next time a senior leader says to you, “We can’t say that,” consider pushing back when the statement is an honest (if positively framed) description of your organization’s skills, competency levels, and past challenges. If there is a fatal flaw in the grant application, that is a problem, but the solution is not to mask the flaw with false or misleading language or data; rather, it is to address the problem or not apply for the grant. If you submit a proposal without correcting these issues and your organization is subsequently caught falsifying data or misrepresenting its capabilities or project needs, it may face long-term repercussions, including a compromised reputation and broken relationships with key funders.
Most grant writers will, at some point, be asked to massage unfavorable facts into preferred ones, or will see their fact-based prose reworked into something less than strictly true. It may sound like business as usual, what you have to do to win a grant. But these little acts of truth-stretching, whether exaggerations, omissions, or misrepresentations, exact a cost.