“It is anticipated that the majority of patent holders will actively monetize their SEPs covering standards such as 5G, Wi-Fi 6 or VVC in this fast-moving, high-investment environment. Any company adopting these standards must decrease operational risk and expense exposure by taking a proactive strategy towards SEPs rather than a reactive one.”
Standard Essential Patents (SEPs) are on the rise; the number of newly declared patents per year has almost tripled over the past five years. There were 17,623 new declared patent families in 2020, compared to 6,457 in 2015 (see Figure 1).
Figure 1: Declared patent families by year of declaration (IPlytics Platform, 2021)
The 5G standard alone accounts for over 150,000 declared patents since 2015. Litigation around SEPs has increased in parallel. One driving factor of recent patent litigation is the shift in connectivity standards (eg, 4G/5G, Wi-Fi), which in the past were mostly used in computers, smartphones and tablets but are now increasingly implemented in connected vehicles, smart homes, smart factories, smart energy and healthcare applications. Another reason why litigation may rise further is the belief that large SEP owners such as Huawei, ZTE or LG Electronics may soon sell parts of their SEP portfolios, which are likely to end up in the hands of patent assertion entities (PAEs). One way or another, it is anticipated that the majority of patent holders will actively monetize their SEPs covering standards such as 5G, Wi-Fi 6 or VVC in this fast-moving, high-investment environment. Any company adopting these standards must decrease operational risk and expense exposure by taking a proactive strategy towards SEPs rather than a reactive one.
Many of the businesses that will adopt standards subject to SEPs have little expertise in negotiating SEP licenses. Here, understanding the overall SEP landscape is critical for smooth standard adoption, maintaining profitability and protecting the capacity to sell new products and services with sufficient access to third-party patent rights. However, not all declared patents are essential, and not all essential patents are declared. This is why one of the major challenges when licensing, transacting or managing SEPs is that no public database provides information about verified SEPs. A recently published 5G patent study uses a sample of 2,000 randomly selected self-declared 5G patents to identify the share of fully mappable patents, that is, patents where all claim elements were found in the 5G standard specification and a claim chart was made to justify that the patent is essential. The results of this study confirm that patent essentiality differs strongly across self-declared 5G patent portfolios, with essentiality rates ranging from only 6% to 30%.
The uncertainty regarding the essentiality of declared SEPs is an important topic in the policy debate about fair, reasonable and non-discriminatory (FRAND) licensing. Different approaches have been proposed to increase transparency about the number of truly essential patents in different patent holders’ portfolios. As assessments of the proportion and number of truly essential patents owned by different patent holders play an important role in FRAND determinations, it is crucial to understand the diversity of existing approaches and how to properly interpret their results. In this regard, policy makers have started a debate about publicly available databases of SEP claim charts to increase transparency. More specifically, the European Commission has recently published a pilot study for essentiality assessment of SEPs, in the course of which essentiality checks were conducted by several stakeholders, including patent offices. Below, we take a closer look at academic work proposing different SEP determination methods: essentiality checks of individual SEPs, assessments of random samples, and predictive models using semantic comparisons between patent and standard documents as well as data on technical contributions and patent citations.
Random Samples of Subject Matter Expert-Mapped SEPs
The idea of random sampling is to draw a true random sample from a larger population so that the sample supports inferences about the overall data set. This method has also been applied to databases of self-declared, potentially standard-essential patents. The TCL v Ericsson case is one court-accepted example, where TCL commissioned subject matter experts to study a random sample of 2,600 ETSI-declared 2G, 3G and 4G patents to determine the essentiality rate. Other studies followed similar approaches, using samples of self-declared patents to identify essentiality rates. A recently published research study, however, identifies the limitations of sampling methods: Essentiality Rate Inflation and Random Variability in SEP Counts with Sampling and Essentiality Checking for Top-Down FRAND Royalty Rate Setting (Keith Mallinson, 2021). The quantitative analysis the author undertook measures the extent of these inaccuracies, which should be properly and fully considered before sample sizes are set and before the short cut of sampling is adopted. While there are no set bounds for an acceptably accurate range in determinations, the author considered a reasonable proportionate accuracy requirement for essentiality rate determination to be within ±15% (ie, a 30% range for the determined essentiality rate as a proportion of the true essentiality rate) at the 95% confidence level. Given his view that true essentiality rates are closer to 10% than to 30% or 40%, he concludes from his analysis that samples including thousands of patents are required in top-down FRAND royalty rate setting for standards such as 3G, 4G or 5G. For example, if the essentiality rate is only 10%, a sample size approaching 3,000 declared-essential patents per standard, at the very least, would be required.
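The relationship between the assumed essentiality rate and the required sample size can be sketched with the standard binomial sample-size formula. This is a simplified illustration, not Mallinson’s actual model, which accounts for additional sources of error and therefore arrives at larger sample sizes:

```python
import math

def required_sample_size(p, rel_accuracy=0.15, z=1.96):
    """Binomial sample size so that the estimated essentiality rate p
    lies within +/- rel_accuracy * p at ~95% confidence (z = 1.96)."""
    margin = rel_accuracy * p  # relative accuracy converted to an absolute margin
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# A 30% essentiality rate needs roughly 400 sampled patents,
# while a 10% rate already needs more than 1,500.
print(required_sample_size(0.30))  # → 399
print(required_sample_size(0.10))  # → 1537
```

Under these simplified assumptions, the lower the true essentiality rate, the sharply larger the sample must be; once assessor error is factored in, the requirement moves further towards the thousands of patents per standard discussed in the study.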
In 2019, a group of researchers compared up to eight different essentiality studies on 2G, 3G and 4G declared patents to examine disagreement among technical experts with regard to patent essentiality. The authors took into account systematic differences in essentiality probabilities across the studies, contributors and standards. Their method aggregates all available information across the studies and thus forms the single best estimate of essentiality for any given contributor’s portfolio. This “best estimate” is therefore the single best observable proxy for the unobservable beliefs held by each party to portfolio licensing negotiations.
While it is often argued that mapping all declared patents is economically infeasible, research by Rudi Bekkers, Elena M. Tur, Joachim Henkel, Tommy van der Vorst, Menno Driesse and Jorge L. Contreras (2021), Overcoming inefficiencies of patent licensing: A method to assess a patent’s essentiality for technical standards, reports on the technical feasibility of a system of expert assessments of patent essentiality. In the course of this research, patent examiners conducted SEP assessments for over 100 working days. The purpose of the study was to investigate whether essentiality assessments can be made sufficiently efficient (in terms of time and costs) as well as sufficiently accurate to set up a large-scale system of essentiality assessment and thus overcome important inefficiencies in the market for SEP licensing. Comparing the outcomes to a high-quality reference point, the authors conclude that sufficiently accurate expert assessments, at a price level that allows large-scale testing, are certainly technically feasible, and they identify routes to further improvement.
Semantic Comparisons of Patent Claims and Standard Sections
One reason why human SEP determination is both costly and time consuming is the complexity of the standardized technology. Standards such as 4G or 5G consist of over 1,000 so-called technical specifications (TS). Each TS may have up to 600 pages and hundreds of sections. To identify whether a declared patent relates to a standard, experts must study and understand all patent claims and map identified claim elements against all possible standard sections. Moreover, one patent may be declared to several standards documents, all of which must be considered when mapping the patent claims. The sheer number of declared patents potentially essential to thousands of technical specifications calls for more automated solutions to at least support the manual claim charting of SEPs. State-of-the-art semantic algorithms represent documents as vectors in term spaces, allowing comparison of the actual content of a patent claim and a standard section rather than the mere overlap of keywords. Claim language and the language of standard specifications are often very different: patent claims are drafted by patent attorneys using broad terminology so that the claims apply to as many applications as possible, while standard specifications are written by the technical engineers who develop the standard and use very specific language. To overcome this, semantic models are trained to understand the context of claims and standards, so that the algorithms learn to recognize different expressions for concepts found in patent claim elements. A recently published research article, Truly Standard-Essential Patents? A Semantics-Based Analysis (Lorenz Brachtendorf, Fabian Gaessler, Dietmar Harhoff, 2021), introduces a semantics-based method for approximating the standard essentiality of patents. In a first empirical application, the authors illustrate the measure’s usefulness in estimating the share of true SEPs in patent portfolios for several mobile telecommunication standards.
The authors find company-level differences that are statistically significant and economically substantial. Furthermore, they observe a general decline in the average share of presumably true SEPs between successive standard generations.
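The basic vector-space comparison underlying such methods can be sketched in a few lines. The sketch below uses plain TF-IDF weighting, which only captures shared vocabulary; the trained semantic models described above go further by bridging the differing terminology of claims and specifications:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Represent each document as a sparse TF-IDF vector in term space."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()                      # document frequency of each term
    for tokens in tokenized:
        df.update(set(tokens))
    n = len(docs)
    return [{t: (tf[t] / len(tokens)) * math.log(n / df[t]) for t in tf}
            for tokens in tokenized
            for tf in [Counter(tokens)]]

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical texts: a patent claim, a related standard section, an unrelated one.
docs = ["modulation scheme for uplink transmission",
        "the uplink transmission uses a modulation scheme",
        "battery housing for a vehicle door"]
claim_vec, section_vec, other_vec = tfidf_vectors(docs)
print(cosine(claim_vec, section_vec) > cosine(claim_vec, other_vec))  # → True
```

Ranking all sections of a technical specification by their similarity to a claim can then shortlist candidate passages for a human expert to review, rather than replacing the expert outright.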
Comparing Random Sampling, Semantic Scores and Prediction Models to Determine Patent Essentiality
In addition to the semantic comparison of patent claims and standard sections, computer-based algorithms can extend the correlation of patent and standard data by, for example, mapping a patent’s listed inventors (name, surname, affiliation) to participation in the corresponding standards meetings, or by mapping the patent applicant’s or assignee’s accepted standards contributions that relate to the declared standard. Such patent characteristics can be used as features in prediction models that estimate the likelihood of a patent being standard essential. A research article, Precision and bias in the assessment of essentiality rates in firms’ portfolios of declared SEPs (Justus Baron and Tim Pohlmann, 2021), compares the relative merits of three different approaches to estimating the number of actual SEPs in different companies’ 5G SEP portfolios:
- analyses of random samples;
- predictive modeling using observable patent characteristics; and
- a more superficial individual examination of every declared SEP.
For the empirical analysis, the authors use declarations of (potential) SEPs from the ETSI IPR Database classified as 5G relevant, drawing a random sample of about 1,000 USPTO- or EPO-granted, 5G-declared patents mapped by subject matter experts. The sample data was used both for the sample analysis and for the predictive modeling. The results show that checks of every declared SEP and predictive modeling may achieve greater precision than simple sampling, but are susceptible to systematic bias. The authors recommend sampling for the estimation of essentiality ratios in large firm portfolios of declared SEPs, while predictive modeling is useful for the analysis of larger numbers of smaller SEP portfolios. For small portfolios, a light-touch review of all declared SEPs may also be appropriate, provided that the assessment error is generally zero-centered.
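A prediction model of the kind described above can be sketched as a simple logistic regression over illustrative patent features. The features and training data here are hypothetical placeholders (eg, normalized counts of standards-meeting attendance and accepted contributions), not the actual variables or model of the Baron and Pohlmann study:

```python
import math

def sigmoid(z):
    """Logistic function mapping a linear score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(features, labels, lr=0.1, epochs=2000):
    """Fit logistic regression weights by stochastic gradient descent.
    Labels are 1 (expert-confirmed essential) or 0 (not essential)."""
    weights = [0.0] * len(features[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
            err = p - y  # gradient of the log-loss for this example
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
            bias -= lr * err
    return weights, bias

def essentiality_probability(weights, bias, x):
    """Estimated probability that a declared patent is truly essential."""
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)

# Hypothetical expert-mapped sample:
# feature vector = [meeting_attendance_norm, accepted_contributions_norm]
features = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.7],
            [0.1, 0.2], [0.2, 0.1], [0.0, 0.1]]
labels = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(features, labels)
print(essentiality_probability(w, b, [0.9, 0.9]) > 0.5)  # → True
```

Trained on an expert-mapped sample, such a model can score every remaining declared patent, which is why this approach scales to large numbers of small portfolios but also inherits any systematic bias in its features.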
While recent research sheds light on different SEP determination methods, policy makers as well as industry call for transparency about which patents are truly essential. In this regard, an upcoming virtual Research Roundtable on “Mechanisms, Governance, and Policy Impact of SEP Determination Approaches” will discuss the results of recent academic research and their implications for the overall policy debate on FRAND determination.
Tim Pohlmann is the CEO and founder of IPlytics. He earned his doctoral degree with the highest distinction from the Berlin Institute of Technology, with a dissertation on patenting and coordination in standardisation. He then went on to work as a post-doctoral researcher and consultant for the Law and Economics of Patents Group at CERNA, MINES ParisTech.
In his work as an economist and consultant, Dr Pohlmann was confronted with the challenge that standards databases such as those of the European Telecommunications Standards Institute and the Institute of Electrical and Electronics Engineers have no real, meaningful connection with comprehensive global patent databases. He realised that if we are to keep pace with the next technology revolution, then as IP professionals, we need to rethink – even revolutionise – how we approach both patent and standards data, to provide business-ready knowledge for actionable decision making across our organisations.