FENG Ścieżka SMART is the most flexible innovation-funding instrument in Poland's 2021–2027 portfolio. The program's total budget is around EUR 7.9 billion, and a single project can reach PLN 140 million in funding in a consortium model. From an AI company's perspective — especially one building products with a research component — this is the most natural path to financing large ambition.

It is also one of the programs where the gap between "looks good" and "gets funded" is widest. Applications are long, scoring is nuanced, and the same mistakes keep returning in teams submitting for the first time. This piece is an attempt to flatten that curve.

Disclaimer: Exact call dates, budget limits and scoring criteria can change. Before starting work on an application, always verify the current competition documentation directly with the awarding institution (PARP or NCBR, depending on the module).

Structure of Ścieżka SMART — modules and the logic of combining them

First thing to understand: SMART is not one program, it's a construction kit. A project consists of modules: two of them are the core modules, of which every project must include at least one; the rest are optional.

The two core modules are:

  • R&D (industrial research and experimental development) — the classic research module. The project must produce new technical knowledge that enables building the innovation.
  • Innovation deployment — the module in which the output of earlier R&D (your own or acquired) is commercialized.

Optional modules include: R&D infrastructure, digitalization, greening, skills, internationalization. Each has its own rules and its own aid intensity — from 25% up to 80% of eligible costs, depending on region, company size and cost type.
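
To see how intensities translate into an indicative funding amount, here is a minimal back-of-envelope sketch in Python. The module names and intensity values are illustrative assumptions only; the real rates come from the regional aid map and the current call documentation.

```python
# Back-of-envelope only. Intensity values below are placeholders; real rates
# depend on the regional aid map, the module, company size and cost type.

MODULE_INTENSITY = {
    "rd_industrial_research": 0.80,  # hypothetical: small company + premiums
    "rd_experimental_dev": 0.60,     # hypothetical
    "deployment": 0.40,              # hypothetical
    "skills": 0.70,                  # hypothetical
}

def module_funding(eligible_costs_pln: float, module: str) -> float:
    """Funding for one module = eligible costs x aid intensity."""
    return eligible_costs_pln * MODULE_INTENSITY[module]

# Example project: eligible costs per module, in PLN.
project = {
    "rd_industrial_research": 8_000_000,
    "rd_experimental_dev": 12_000_000,
    "deployment": 20_000_000,
    "skills": 1_500_000,
}

total = sum(module_funding(cost, m) for m, cost in project.items())
print(f"Indicative total funding: PLN {total:,.0f}")  # PLN 22,650,000
```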

For AI companies, the combination that most often works well is R&D + innovation deployment + skills. The R&D module funds development of the model or solution, deployment covers commercialization, and the skills module covers recruitment and training of the team. That's a three-year road to scaling the company, with funding often in the PLN 15–60 million range.

Who can apply and in what configurations

SMART allows three applicant types:

  1. A single enterprise — micro, small, medium or large. Solo projects have lower funding caps than consortium variants.
  2. A consortium of enterprises — up to 3 firms. Makes sense when different firms bring different competencies, or when the project covers the whole chain from R&D to deployment.
  3. A consortium of an enterprise with a research institution — the most common configuration for projects with a high R&D component. The highest funding ceilings (up to PLN 140 million) live here.

In practice for AI companies: if the project contains a real research component (new algorithms, new architectures, dedicated models for a new domain), a consortium with a research institution is the natural path. If the project mostly deploys existing technologies in a new context, the deployment-led modules, which do not require a research consortium, are the better fit.

How to build a consortium that wins

A consortium with a research institution is not a formality — it's a decision that determines the quality of the application and the odds of funding. I see three patterns that work:

Pattern 1: Integrator + R&D institution + end user

A technology company as the integrator and commercialization owner. An R&D institution (e.g., AGH Cyfronet for computational projects, SGGW for AgriTech, the Institute of Communications for ICT) as the research partner. A receiving company (a large sector player) as a consortium member in the role of tester and first deployment site.

This pattern works particularly well for AGROSTRATEG and SMART with a sectoral component — it delivers the full chain from idea to deployment, which counts heavily in scoring.

Pattern 2: Two complementary enterprises + R&D institution

Two companies with different competencies (e.g., software + hardware, or AI + domain expertise) join forces. The R&D institution handles the research component. A pattern good for projects where no single player could build the whole thing alone.

Pattern 3: Large player + innovator + R&D institution

A large company (bank, utility, manufacturer) as the consortium leader, chosen for its scale. An innovator (AI / ML company) as the technology supplier. An R&D institution. A pattern in which the large player guarantees deployment, and the innovator brings the new technology. Common in regulated sectors.

Warning sign in a consortium: If the R&D institution was picked "at the last minute" and its team has no publications or projects in the exact domain of the application, the reviewer will notice. A consortium is built over months, not two weeks before submission. The best applications come from consortia that have had earlier joint work — even small-scale.

Budget — traps and best practices

The budget is where most applications get technical cuts. A few things worth watching:

  • Cost categorization. Every category (salaries, external services, materials, depreciation, indirect costs) has its own eligibility rules. Misclassifying a cost doesn't just reduce funding — it signals to the reviewer that the application was put together carelessly.
  • Personnel costs — realistic rates. The project team must have salaries consistent with market rates in the given region. Inflating rates to "use the pool" ends in rejection of specific items.
  • Indirect costs — flat rate. A flat rate is typically applied (e.g., 25% of direct costs excluding subcontracting). A simple rule, but it has to be applied correctly; the sketch after this list works through this rule and the subcontracting cap below.
  • Consistency between modules. If R&D produces technology X and deployment concerns Y, the reviewer will ask a question. Modules must form a coherent narrative.
  • Subcontracting — limits. The R&D module has a subcontracting cap, often 10–15% of module value. A company that "outsources" most of its research signals that it lacks in-house research competence — and that lowers the score.
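
Two of these rules are mechanical enough to sanity-check in a few lines. The sketch below assumes a 25% flat rate on direct costs net of subcontracting and a 15% subcontracting cap; both figures are examples and must be verified against the current call documentation.

```python
# Assumed parameters: 25% flat rate for indirect costs, 15% subcontracting
# cap. Verify both in the call documentation before relying on them.

def indirect_costs(direct: float, subcontracting: float,
                   flat_rate: float = 0.25) -> float:
    """Flat-rate indirect costs: rate applied to direct costs
    net of subcontracting."""
    return flat_rate * (direct - subcontracting)

def subcontracting_within_cap(module_value: float, subcontracting: float,
                              cap: float = 0.15) -> bool:
    """True if planned subcontracting stays within the module cap."""
    return subcontracting <= cap * module_value

direct = 10_000_000   # direct R&D module costs, PLN
subs = 1_200_000      # planned subcontracting, PLN

print(indirect_costs(direct, subs))             # 2200000.0 PLN
print(subcontracting_within_cap(direct, subs))  # True (12% <= 15%)
```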

Scoring — where applications gain and lose points

Reviewers work from a rubric — a set of criteria with weights. Exact criteria can change between calls, but the logic remains. These are the highest-leverage areas I see:
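
To make the rubric logic concrete, here is a toy weighted-sum illustration. The criteria names, weights and the 0–5 scale are invented for the example; the real criteria and weights are published with each call.

```python
# Toy rubric: all names, weights and scores below are invented.

weights = {  # hypothetical weights, summing to 1.0
    "innovativeness": 0.35,
    "market_and_commercialization": 0.25,
    "team_competence": 0.20,
    "social_environmental_impact": 0.20,
}

scores = {  # example reviewer scores on a 0-5 scale
    "innovativeness": 4,
    "market_and_commercialization": 3,
    "team_competence": 5,
    "social_environmental_impact": 2,
}

weighted = sum(weights[c] * scores[c] for c in weights)
print(f"Weighted score: {weighted:.2f} / 5")  # 3.55 / 5
```

Even in this toy version the takeaway holds: the innovativeness and commercialization blocks carry the most weight, so a point lost there costs more than a point lost anywhere else.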

Innovativeness and competitive edge

The highest-scoring block. You have to show why the solution is better than existing ones and in which dimensions (functionality, cost, accessibility). General statements ("our AI is innovative") are lethal here — you need concrete benchmarks, comparisons, references to the state of the market.

Market demand and commercialization plan

The second key block. The reviewer wants to see that the product has a market, the team understands that market, and there is a realistic plan for commercialization after the project ends. Best when the application includes letters of intent from prospective customers (non-binding but concrete).

Team competence

Documentation of experience for key people, publications, prior projects. A CV alone is not enough — you need references to specific projects that prove the team can execute the declared work.

Social and environmental impact

In 2026, this dimension is gaining weight. Green components, emissions reduction, impact on just transition — this is not a cosmetic add-on, it's a real source of points. AI projects can score well here in the dimension of resource-use optimization, even if they are not themselves "green."

Seven mistakes that disqualify applications

  1. Innovativeness described declaratively, without benchmarks. "Our technology is unique" is not an argument. "Our model achieves X on benchmark Y, while SOTA is Z" is.
  2. No list of prospective customers. Commercialization without a list of company names you're in conversation with is declarative. That's a red flag.
  3. Consortium chosen in a rush. An R&D institution with no publications in the project domain adds zero substantive value to the application.
  4. Outcomes defined in general terms. "We will create an AI system" is not an outcome. "We will produce a model with parameters X, Y, Z, validated on dataset A" is.
  5. Inconsistency between modules. R&D leads to technology A, deployment concerns B. This shows the team has not thought through the project narrative.
  6. Unrealistic budget. Rates too high, maintenance costs too low, no contingency reserve. The reviewer knows the market.
  7. No green or social element. In 2026, projects without a "larger-than-me" dimension lose an important set of points.

Timing — when to start if you're aiming for the next call

If the call is scheduled for June 16 – August 11, 2026, the recommended path looks like this:

  • March 2026: decision to apply, initial project construction, identification of the research partner.
  • April 2026: first draft of the project description, conversations with prospective end users, letters of intent in progress.
  • May 2026: budget, schedule, formal consortium agreements.
  • June 2026: final version of the application, internal review, submission as early as possible in the call window.

Starting work in June with August submission in mind is the road to an application that "looks like an application" but won't win. A good application takes months — and then it wins.

If you're thinking about SMART 2026: We help across the entire path — from deciding whether the project fits this instrument, through consortium partner selection, to the final application. Write to us now — March is the last moment to start the work deliberately.