What Happens When Grant Proposals Gloss Over the Facts
January 11, 2026
Grant proposals are marketing documents that pitch what is envisioned for the future. They are also historical records of sorts, taking stock of recent projects and accomplishments, and profiles of an organization’s capabilities and capacities: claims about what an organization has done, knows how to do, and has the resources to do again.
Grant writers are not fiction writers. They are supposed to write clearly and persuasively, but they are not supposed to finesse the proposal’s language to such an extent that it becomes untrue. And yet, most grant writers have experienced being asked to “massage” unfavorable facts into preferred ones, or seen their fact-based prose reworked by others into something that has the essence of truth but is not strictly true, or is at least less transparent.
To some, this may sound like business as usual and what you need to do to win a grant. However, these little acts of truth-stretching, which can take the form of exaggerations, omissions, and misrepresentations, can exact a cost.
How It Happens: The Slippery Slope from Unknowns to Half-Truths
Grant proposals have always been on the boundary between fact and fiction because they are about selling a transformation that could occur if funding were received. What is proposed is almost always different from what is implemented for several reasons, from incorrect assumptions during the planning phase to unpredictable events—such as natural disasters or political crises—during the project’s rollout.
Between proposals based on assumptions and applicants positioning themselves in the most favorable light, grant writers become accustomed to treating information as building blocks to be shaped and positively framed. For example, in the paragraphs below, versions A and B tell the same basic story, but version A is the preferred telling because it frames things positively and provides supporting details:
A: “Since X year, we’ve completed four projects funded by X donor (see Box 1). Using lessons learned from the two projects that were delivered on time and on budget, and from the remaining projects that required no-cost extensions, we reviewed and updated our project management processes and procedures (see Attachment X). These updates include introducing a customized CMS platform designed to manage budget development and oversight, enabling real-time tracking and problem-solving.”
versus:
B: “Two of our last four projects encountered challenges and required no-cost extensions. In response, we updated our project management processes and procedures to better track project results, flag problems, and solve issues.”
The concern with factual stretching arises less from positive framing than from vague writing that not only lacks supporting details but also obscures facts or inflates accomplishments. Vagueness of this kind leaves any suggestion of an organization’s weaknesses or missteps unsaid. The result is language that resembles the examples above but differs in key details. For example, a more “massaged” statement might read:
C: “Our organization excels at project management and oversight. We deliver projects on time and on budget, providing exceptional value to donors, particularly given the size and complexity of the projects we implement. Supporting our effectiveness and systematic approach to project implementation are our recently updated, comprehensive project management protocols, which leverage evidence-based, cutting-edge tools, enabling us to monitor all project activities and resolve complex issues in real-time.”
Is this last statement true? And is it reasonable, or is it perhaps misleading?
Based on the original fact pattern presented in Example A, 50% of recent projects were delivered on time and on budget. These successful projects may have been implemented non-consecutively or implemented several years apart, potentially undermining the notion of a consistent pattern of ongoing or established project management success.
Additionally, none of the statements in this final example appear to be supported by details. While the project management protocols may be extensive, the statement doesn’t specify when they were last updated, leaving open the possibility that the updates are so recent they remain untested. Example C also states that recent changes to the project management protocol allow “all” project activities to be monitored, a bold claim that may not withstand scrutiny. A major red flag for version C is that it relies on adjectives to persuade readers that the organization is a stellar performer, rather than providing data to substantiate those assertions.
In comparison, Example A supports its statements by linking to the updated project management protocol and providing details on the four most recent projects, helping the reader better understand the organization’s experience and the lessons learned. Example A is also transparent about the challenges: while it first mentions the two projects completed on time and on budget, it doesn’t conceal that two other projects did not meet those standards.
Perhaps the differences between A, B, and C seem subtle or irrelevant. If the donor wants more details, they’ll ask for them, the thinking goes. No harm done. What is worrisome is that if the donor doesn’t provide any feedback or ask for more details, it can reinforce the applicant’s belief that what was written was fine, that donors don’t expect precision, leading to the same vague text being reused in the next proposal. If left uncorrected, an organization’s boilerplate may, over time, be populated with language so general, or a project performance history so sanitized, that it could apply to many organizations and is essentially meaningless. Generalized statements of success that lack supporting details also make fact-checking difficult, if not impossible, when a donor or an internal reviewer asks for references.
Tolerance for generalized or inflated statements can grow. If you become accustomed to writing proposals that gloss over the facts, you may become less diligent about verifying where the supporting data comes from and whether it has been properly validated. The risks of writing loose qualitative statements may appear low, even acceptable as part of normal “proposal spin,” but the stakes are higher for quantitative data, where falsified or unvalidated figures raise serious ethical concerns.
When the Numbers Lie
A scenario that can be easier to see as problematic involves the manipulation of quantitative data. The most common situations we’ve seen involve grant writers or technical contributors cherry-picking data or fabricating baseline data during the proposal phase.
Cherry-picking includes citing only those studies that support the most compelling description of a problem’s scale or of the proposed approach. It isn’t a good practice, but it can go unchallenged, depending on how well the funder’s review panel knows the programmatic area and how thoroughly they check the proposal’s citations. The risk is that if the data used to design a project mischaracterize the problem, the project design could be flawed, leading to difficulties and possible failure during implementation. And organizations with a history of failed projects have greater difficulty securing funding.
The second scenario, in which an applicant fabricates baseline data, poses an even greater risk. We’ve seen this take two forms: (1) the applicant seeks to establish that funding for their proposed intervention is urgently needed, given the scale of the unmet need, but doesn’t know how to determine that need, so they insert their best guess; or (2) the applicant does have reliable data showing the unmet need, but the numbers are unimpressive, so they fabricate more favorable ones. These mistakes or overreaches can go unnoticed for two reasons: first, while proposals should include references, many donors do not require supporting data; second, when funders do require data sources to be cited, they don’t always check them.
Variations of this problem of using weak or false data include failing to conduct research for each new proposal and instead relying on research conducted several years ago, which is likely to be out of date; and using current data but, either deliberately or through inattention, taking the data out of context when citing it in the proposal.
An example of the latter occurs when someone is not careful in interpreting datasets. For example, if a government study reports that 50% of children aged 10 to 13 read below grade level in a specific geographic area, a misrepresentation would be to report that this study says 50% of children aged 10 to 13 in the entire public school system read below grade level.
Taking data out of context is sometimes an innocent mistake. In other cases, it can be a deliberate choice in response to pressure to write the most compelling proposal possible: If the data don’t support what you want to say, you can mask this by presenting data out of context or ambiguously.
An applicant might cite an authoritative study but subtly misrepresent its findings, relying on reviewers to accept the source at face value and, if the data seem plausible, not check the references. These obfuscations take different forms. One is to cite data alongside what looks like a legitimate reference but in fact points to a different or outdated resource. Another, which makes it harder for reviewers to verify data, is citing a source only in the broadest terms (“A recent U.N. report…” or “Annual Report, 2021”) without providing a complete reference.
As with the earlier example on qualitative data, maybe this seems like business as usual, that it’s just tweaking things to make the best case. While not every case of glossed-over or misrepresented facts is a big deal, if adopted as common practice, it becomes one.
The Cumulative Effects of Alternative Facts
At the proposal stage, fabricated or misrepresented data may help make the case for a project, but it complicates matters if you receive an award and must execute and report on it. If you overestimated a problem’s size and scope, the guesstimated baseline data may make your project appear less successful than it actually was.
Another problem with framing things to fit a preferred narrative is that it is harder to report truthfully when things don’t go well or conflict with what you wrote in the proposal. If you state in the proposal that all project staff are experts in their field with years of experience who can hit the ground running, it becomes problematic if, in the execution of the project, you have to report that you must pay for additional staff training because the staff are actually not as qualified as you claimed. If you present yourself as a stellar, unparalleled organization with exceptional capabilities in your proposal, you have nowhere to go but down once the grant and the realities of executing the work begin to unfold.
It’s always better to portray your organization, its capabilities, the scope of the problem, and the strengths and weaknesses of your proposed solution as honestly as possible at the proposal stage. If you do, you will be in a better position to talk candidly with the donor about issues you’ve encountered during project implementation. Reporting on project progress will be easier because you began with accurate information. So, next time a senior leader says to you, “We can’t say that,” consider pushing back if what you’ve written represents an honest description of your organization’s skills, competency levels, and past challenges.
If there is a fatal flaw in the grant application, that is a problem, but the solution is not to mask the flaw with false or misleading language or data; rather, it is to address the problem or not apply for the grant. If you submit a proposal without correcting these issues and your organization is subsequently found to have falsified data or misrepresented its capabilities or project needs, it may face long-term repercussions, including a compromised reputation and strained relationships with key funders.