Diagnosis

Overview

The /diagnosis endpoint is the core of the Infermedica API. Using a patient’s sex, age, and medical evidence (including symptoms, risk factors and laboratory test results), it suggests possible causes and generates questions to drive an interview similar to one a doctor would have with a patient.

⚠️

The /diagnosis endpoint in the API is not a substitute for independent medical advice nor does it offer a diagnosis per se. The information presented by Infermedica’s API should not be considered medical advice. The API is not intended or designed to be used to diagnose, treat, cure, or prevent any disease. The information that Infermedica’s API provides should not be used for therapeutic purposes or as a substitute for a health professional’s advice.

Stateless API

The Infermedica API is stateless. Since the API does not track the state or progress of interviews, each request to /diagnosis must contain all the information gathered to that point about a given case. You can’t send only the answer to the most recent question returned from /diagnosis; your application must store sex, age, initial evidence, and all previous answers, and resend them each time, along with the most recent answer.

Interview flow

To carry out a symptom assessment interview with a patient, you will need multiple calls to /diagnosis. Before the first request, the patient’s sex, age, and initial evidence must be collected (e.g. the patient's chief complaint and relevant risk factors). The response to the first request will contain an interview question that should be presented to the patient. The patient's answer should then be added to the list of already collected evidence. The process should continue in the following manner:

  • send a request to /diagnosis with the updated evidence list,
  • ask the patient the question returned from /diagnosis,
  • add the patient's answer to the existing evidence list,
  • repeat the steps.

This process can continue for as long as necessary. The should_stop attribute suggests when the interview should be considered finished. Our engine takes into account several factors when determining this, including the interview length and the confidence it has in the rankings. It's possible to continue the interview beyond this point if needed, but we highly recommend honoring this attribute for most use-cases. In general, the number of questions answered and the probability of the top conditions in the rankings should be considered when deciding when to stop the interview.
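The steps above can be sketched as a simple loop. In this minimal Python sketch, `call_diagnosis` stands in for your HTTP client and `answer_question` for your UI layer (neither is part of the API), and the question cap is only an illustrative safeguard:

Python

```python
def run_interview(sex, age, initial_evidence, call_diagnosis, answer_question,
                  max_questions=30):
    """Stateless interview loop: the full evidence history is resent
    with every request, growing by one answer per round."""
    evidence = list(initial_evidence)
    response = None
    for _ in range(max_questions):
        request = {"sex": sex, "age": {"value": age}, "evidence": evidence}
        response = call_diagnosis(request)
        # Stop when the engine recommends it or runs out of questions.
        if response.get("should_stop") or response.get("question") is None:
            break
        # Present response["question"] to the patient, then append
        # their answer(s) before the next call.
        evidence.extend(answer_question(response["question"]))
    return response
```

`call_diagnosis` would typically POST the request object to /diagnosis with your App-Id and App-Key headers; `answer_question` translates the returned question into evidence objects.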

Request

The /diagnosis endpoint responds to POST requests containing a JSON object that describes a single medical case, e.g.

cURL
curl "https://api.infermedica.com/v3/diagnosis" \
  -X "POST" \
  -H "App-Id: XXXXXXXX" \
  -H "App-Key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" \
  -H "Interview-Id: d083e76f-3c29-44aa-8893-587f3691c0c5" \
  -H "Content-Type: application/json" \
  -d '{
    "sex": "female",
    "age": {
      "value": 25
    },
    "evidence": [
      {"id": "s_47", "choice_id": "present", "source": "initial"},
      {"id": "s_22", "choice_id": "present", "source": "initial"},
      {"id": "p_81", "choice_id": "absent"}
    ]
  }'

Sex and age

The sex and age attributes are two required elements of every request to /diagnosis. Under the hood, sex and age are used to automatically instantiate corresponding risk factors that may alter the base prevalence of medical conditions in Infermedica's model.

The sex attribute indicates the patient's biological sex and can have one of two possible values: female or male.

The age is composed of two attributes:

  • value – an integer value between 0 and 130; this attribute is required,
  • unit – one of year or month; this attribute is optional and the default value is year.
JSON
"sex": "female",
"age": {
    "value": 11,
    "unit": "month"
}

Omitting sex or age or providing invalid values will yield a 400 Bad Request error.
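A minimal Python sketch of client-side validation mirroring these rules (the helper name and ValueError behavior are our own; the API itself responds with 400 Bad Request):

Python

```python
def validate_patient(sex, age):
    """Normalize the sex/age fragment of a /diagnosis request,
    applying the rules described above."""
    if sex not in ("female", "male"):
        raise ValueError("sex must be 'female' or 'male'")
    value = age.get("value")
    if not isinstance(value, int) or not 0 <= value <= 130:
        raise ValueError("age.value must be an integer between 0 and 130")
    unit = age.get("unit", "year")  # 'year' is the default unit
    if unit not in ("year", "month"):
        raise ValueError("age.unit must be 'year' or 'month'")
    return {"sex": sex, "age": {"value": value, "unit": unit}}
```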

Evidence

The evidence list is the most important part of each request to /diagnosis. While evidence is technically an optional attribute, to receive a non-empty response there must be at least one present symptom or laboratory test result added to your evidence list. Please note that sending only risk factors or only absent symptoms might not be sufficient to start the interview.

Each piece of evidence should be sent as a simple JSON object with two required attributes: id and choice_id. Optionally, a source attribute can also be added (see corresponding section below).

JSON
"evidence": [
  {"id": "s_98", "choice_id": "present"}
]

The id attribute indicates an observed symptom or risk factor.

The choice_id attribute represents the state of given evidence and can have one of 3 values: present, absent or unknown. Please note that absent and unknown cannot be used interchangeably, as their mathematical meanings are different.

Omitting id or choice_id or providing invalid values will yield a 400 Bad Request error.

Indicating evidence source

Another important attribute of evidence is source. It allows you to mark the exact stage of the interview at which the given evidence was gathered. Thanks to this, the engine can provide more relevant interviews and, as a result, a better final list of most probable conditions as well as improved triage recommendations.

We highly recommend adding the source attribute if any of the following applies to your case:

  • any symptoms are reported by the user as initial evidence (see: Gathering initial evidence),
  • any questions about symptoms or risk factors are prompted or predefined (see: Common risk factors),
  • the /suggest endpoint is used,
  • the /red_flags endpoint is used.

This attribute takes one of the following values:

  • "source": "initial" – evidence reported by user,
  • "source": "suggest" – evidence from /suggest endpoint,
  • "source": "predefined" – evidence predefined separately from the actual interview, should be applied to all custom evidence (not calculated in /diagnosis or /suggest),
  • "source": "red_flags" – evidence from /suggest with red_flags method or /red_flags endpoint,
  • no value – for evidence gathered during the interview, the source attribute should be entirely omitted.

For example:

JSON
{
  "sex": "female",
  "age": {
    "value": 45
  },
  "evidence": [
    {
      "id": "s_21",
      "choice_id": "present",
      "source": "initial"
    },
    {
      "id": "s_156",
      "choice_id": "present",
      "source": "suggest"
    },
    {
      "id": "p_264",
      "choice_id": "present",
      "source": "predefined"
    },
    {
      "id": "p_13",
      "choice_id": "present",
      "source": "predefined"
    },
    {
      "id": "s_1193",
      "choice_id": "present"
    }
  ]
}
⚠️

An evidence source is required for every piece of evidence that was not inferred in /diagnosis; otherwise the quality of the interview may be significantly degraded.
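A small helper can make it hard to forget the source attribute. This Python sketch is our own convention, not part of the API:

Python

```python
VALID_SOURCES = ("initial", "suggest", "predefined", "red_flags")

def make_evidence(evidence_id, choice_id, source=None):
    """Build one evidence object. For evidence gathered during the
    interview itself, pass source=None so the attribute is omitted
    entirely, per the rules above."""
    item = {"id": evidence_id, "choice_id": choice_id}
    if source is not None:
        if source not in VALID_SOURCES:
            raise ValueError("unknown evidence source: %s" % source)
        item["source"] = source
    return item
```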

Gathering initial evidence

Interviews are most effective when they start with some meaningful initial evidence. The search space of available symptoms is very wide, so the statistical inference engine needs a place to start. For this reason, you should aim for at least 2-3 present symptoms as initial evidence. Adding further symptoms, including absent ones, as well as risk factors, is also helpful.

There are many ways to gather initial evidence:

  • using the /search endpoint to implement autocomplete widgets that let users enter and select their observations,
  • using the /parse endpoint to analyze free-text (either pre-existing, like a patient record, or entered by the user, like a chat message) and extract mentions of observations from it,
  • building a predefined list of common or particularly relevant observations for users to choose from,
  • building a human body avatar with each body part mapped to the predefined list of observations for the user to select,
  • using the /suggest endpoint to find observations that have often been selected by other users with similar health problems,
  • allowing users to enter their laboratory test results.

Based on our experience from numerous deployments, it is both important and challenging to design this initial step in a way that will encourage users to provide enough data to begin the interview.

Indicating initial evidence

The initial evidence, i.e. evidence reported by the user before the start of the interview, should be marked as "source": "initial", e.g.

JSON
"evidence": [
  {"id": "s_98", "choice_id": "present", "source": "initial"}
]

There are two consequences of marking evidence as initial:

  • The inference engine can better understand the progress of the interview, which supports the stop recommendation feature
  • Conditions unrelated to the initial evidence are only included in the conditions ranking when their probability is sufficiently high. This makes the interview results more focused on the chief complaints.

In most cases, the initial evidence reported by a patient is related to the conditions in the rankings. However, we've noticed that some users provide initial evidence but later deny all of its related symptoms, causing the engine to broaden its search space and return unrelated conditions. Designating evidence as "initial" prevents this.

Although there are use cases where it is impossible to use the "source": "initial" attribute, and it is therefore optional, we highly recommend using this attribute whenever possible, especially if you are relying on stop recommendations.

Common risk factors

In our medical knowledge base, risk factors can be chronic conditions (e.g. diabetes), lifestyle habits (e.g. smoking), geographical locations (e.g. South America) or events (e.g. a head injury or insect bite). Risk factors alone are not sufficient information to start an interview, but their presence may greatly impact the base prevalence of various conditions. For example, if a patient reports a high fever and chills, it's most likely the flu. However, if the same symptoms are present but we know that the patient has recently returned from some exotic location, the engine will broaden its search towards infectious tropical diseases. Similarly, when a patient reports headache it is important to know if they have recently suffered an injury or trauma.

Although /diagnosis may return questions about risk factors, when implementing a symptom checker we recommend asking the patient about common risk factors before the actual interview begins. This helps to steer the interview in the right direction and to reduce its length.

One way to gather initial risk factors is to use /suggest in relevant risk factors mode. However, some risk factors are not handled by the /suggest algorithm, notably those related to geographical location (either residence in or recent travel to):

  • p_13 – United States or Canada
  • p_14 – Central or South America
  • p_15 – Europe
  • p_16 – Northern Africa
  • p_17 – Central Africa
  • p_18 – Southern Africa
  • p_19 – Australia or Oceania
  • p_20 – Russia, Kazakhstan, or Mongolia
  • p_21 – Middle East
  • p_236 – Asia excluding Middle East, Russia, Kazakhstan and Mongolia

That’s why users are encouraged to gather additional pre-interview risk factors in their applications. Two good reasons for this are:

  • geographical risk factors, which can substantially improve the quality of an interview and its results, can only ever be gathered this way,
  • it ensures that crucial risk factors are always gathered, no matter how the interview goes, e.g. risk factors concerning a patient’s cholesterol.

Every piece of evidence which is not gathered with API methods (e.g. /search, /parse, /suggest, or /diagnosis), should be marked with "source": "predefined", e.g.

JSON
"evidence": [
  {"id": "p_13", "choice_id": "present", "source": "predefined"},
  {"id": "p_7", "choice_id": "absent", "source": "predefined"},
  {"id": "p_9", "choice_id": "absent", "source": "predefined"},
  {"id": "p_28", "choice_id": "present", "source": "predefined"}
]

There are a few groups of common risk factors:

  • risk factors related to patient demographics and history:
    • p_7 – High BMI
    • p_9 – Hypertension
    • p_10 – High cholesterol
    • p_28 – Smoking cigarettes
  • risk factors related to geographical location (either residence in or recent travel to):
    • p_13 – United States or Canada
    • p_14 – Central or South America
    • p_15 – Europe
    • p_16 – Northern Africa
    • p_17 – Central Africa
    • p_18 – Southern Africa
    • p_19 – Australia or Oceania
    • p_20 – Russia, Kazakhstan, or Mongolia
    • p_21 – Middle East
    • p_236 – Asia excluding Middle East, Russia, Kazakhstan, and Mongolia
  • risk factors related to physical injuries and traumas:
    • p_264 – Recent physical injury (marking it absent will also exclude other questions about injury related risk factors listed below)
    • p_144 – Abdominal trauma
    • p_145 – Acceleration-deceleration injury
    • p_146 – Back injury
    • p_232 – Recent head injury
    • p_136 – Chest injury
    • p_53 – Limb injury
  • other risk factors, dependent on age or sex:
    • p_42 – Pregnancy
    • p_11 – Postmenopause

Weight and height

The previous version of the Infermedica API allowed weight and height to be sent along with sex and age. This is no longer supported, but weight-related risk factors are available in our default model and can be included as evidence instead. There are two such risk factors:

  • p_6 – Low BMI
  • p_7 – High BMI

When a patient's weight and height are available, you can compute their BMI (opens in a new tab) in your application and add the appropriate risk factor as present to the evidence list in a /diagnosis call, e.g.

JSON
{
  "sex": "male",
  "age": {
    "value": 45
  },
  "evidence": [
    {"id": "p_7", "choice_id": "present"}
  ]
}

When the patient’s BMI falls within a healthy range (between 19 and 30), you may include both of the above risk factors as absent. If you don’t, /diagnosis may return a question about BMI when such information would be relevant in the symptom assessment process.
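The BMI logic above can be sketched in Python. The cut-offs below follow the 19-30 range mentioned in the text and are illustrative, not medical guidance:

Python

```python
def bmi_evidence(weight_kg, height_m):
    """Translate weight and height into the p_6 / p_7 risk factors
    described above."""
    bmi = weight_kg / (height_m ** 2)
    if bmi < 19:
        return [{"id": "p_6", "choice_id": "present"}]   # Low BMI
    if bmi > 30:
        return [{"id": "p_7", "choice_id": "present"}]   # High BMI
    # Healthy range: mark both as absent so /diagnosis need not ask.
    return [{"id": "p_6", "choice_id": "absent"},
            {"id": "p_7", "choice_id": "absent"}]
```

The returned objects can be appended directly to the evidence list of a /diagnosis request (with "source": "predefined" if gathered before the interview).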

Request extras

The extras attribute may contain additional options that control various aspects of the endpoint. Please refer to the section below for a detailed explanation.

Response

The response contains 5 sections:

  • question – next interview question to ask the patient
  • conditions – ranking of possible medical conditions
  • should_stop – signals when to stop the interview (optional – this attribute is returned only if initial evidence is indicated)
  • has_emergency_evidence – indicates if reported evidence appears serious and the patient should go to an emergency department
  • extras – usually empty, may contain additional or experimental attributes.
JSON
{
  "question": {
    "type": "single",
    "text": "Does the pain increase when you touch or press on the area around your ear?",
    "items": [
      {
        "id": "s_476",
        "name": "Pain increases when touching ear area",
        "choices": [
          {
            "id": "present",
            "label": "Yes"
          },
          {
            "id": "absent",
            "label": "No"
          },
          {
            "id": "unknown",
            "label": "Don't know"
          }
        ]
      }
    ],
    "extras": {}
  },
  "conditions": [
    {
      "id": "c_131",
      "name": "Otitis externa",
      "common_name": "Otitis externa",
      "probability": 0.1654
    },
    {
      "id": "c_808",
      "name": "Earwax blockage",
      "common_name": "Earwax blockage",
      "probability": 0.1113
    },
    {
      "id": "c_121",
      "name": "Acute viral tonsillopharyngitis",
      "common_name": "Acute viral tonsillopharyngitis",
      "probability": 0.0648
    },
    ...
  ],
  "should_stop": false,
  "extras": {}
}

Question

The question attribute represents an interview question that can be presented to the user.

The question attribute can also have a null value. This means that either no present symptom has been provided as initial evidence or, in the rare case of an extremely long interview, that there are no more questions to be asked.

Question types

There are three main types of questions enabled by default, with each requiring slightly different handling. A fourth can be enabled via the enable_symptom_duration option.

single

The single type represents simple Yes/No/Don't know questions, e.g.

JSON
"question": {
  "type": "single",
  "text": "Does the pain increase when you touch or press on the area around your ear?",
  "items": [
    {
      "id": "s_476",
      "name": "Pain increases when touching ear area",
      "choices": [
        {
          "id": "present",
          "label": "Yes"
        },
        {
          "id": "absent",
          "label": "No"
        },
        {
          "id": "unknown",
          "label": "Don't know"
        }
      ]
    }
  ],
  "extras": {}
}

When the user answers a question of the single type, exactly one object with the id of the item and selected choice_id should be added to the evidence list of the next request, e.g.

JSON
"evidence": [
  ...
  {"id": "s_476", "choice_id": "present"}
]
group_single

The group_single type represents questions about a group of related but mutually exclusive symptoms, of which the patient should choose exactly one, e.g.

JSON
"question": {
  "type": "group_single",
  "text": "What is your body temperature?",
  "items": [
    {
      "id": "s_99",
      "name": "Between 99.5 and 101 °F (37 and 38 °C)",
      "choices": [
        {
          "id": "present",
          "label": "Yes"
        },
        {
          "id": "absent",
          "label": "No"
        },
        {
          "id": "unknown",
          "label": "Don't know"
        }
      ]
    },
    {
      "id": "s_100",
      "name": "Above 101 °F (38 °C)",
      "choices": [
        {
          "id": "present",
          "label": "Yes"
        },
        {
          "id": "absent",
          "label": "No"
        },
        {
          "id": "unknown",
          "label": "Don't know"
        }
      ]
    }
  ],
  "extras": {}
}
⚠️

For a question of the group_single type, exactly one object with the id of the selected item and choice_id set to present should be added to the evidence list of the next request, with all other items omitted, e.g.

JSON
"evidence": [
  ...
  {"id": "s_99", "choice_id": "present"}
]
group_multiple

The group_multiple type represents questions about a group of related symptoms where any number of them can be selected, e.g.

JSON
"question": {
  "type": "group_multiple",
  "text": "How would you describe your headache?",
  "items": [
    {
      "id": "s_25",
      "name": "Pulsing or throbbing",
      "choices": [
        {
          "id": "present",
          "label": "Yes"
        },
        {
          "id": "absent",
          "label": "No"
        },
        {
          "id": "unknown",
          "label": "Don't know"
        }
      ]
    },
    {
      "id": "s_604",
      "name": "Feels like \"stabbing\" or \"drilling\"",
      "choices": [
        {
          "id": "present",
          "label": "Yes"
        },
        {
          "id": "absent",
          "label": "No"
        },
        {
          "id": "unknown",
          "label": "Don't know"
        }
      ]
    },
    {
      "id": "s_23",
      "name": "Feels like pressure around my head",
      "choices": [
        {
          "id": "present",
          "label": "Yes"
        },
        {
          "id": "absent",
          "label": "No"
        },
        {
          "id": "unknown",
          "label": "Don't know"
        }
      ]
    }
  ],
  "extras": {}
}
⚠️

An object should be added to the evidence list of the next request for each item of a group_multiple question. Any available choice_id is allowed. Omitting any item may cause the same question to be returned by the API again.
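The answer-handling rules for the three default question types can be summarized in one dispatcher. A Python sketch (the helper and its answers mapping are our own convention, not part of the API):

Python

```python
def answers_to_evidence(question, answers):
    """Translate user answers into evidence objects per question type.
    `answers` maps item id -> choice_id ('present'/'absent'/'unknown').
    - single: exactly one object with the selected choice_id,
    - group_single: one object with choice_id 'present', others omitted,
    - group_multiple: one object per item (omitting items may cause
      the API to return the same question again)."""
    qtype = question["type"]
    if qtype == "single":
        item = question["items"][0]
        return [{"id": item["id"], "choice_id": answers[item["id"]]}]
    if qtype == "group_single":
        (selected,) = [i for i, c in answers.items() if c == "present"]
        return [{"id": selected, "choice_id": "present"}]
    if qtype == "group_multiple":
        return [{"id": item["id"], "choice_id": answers[item["id"]]}
                for item in question["items"]]
    raise ValueError("unsupported question type: %s" % qtype)
```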

Question extras

Questions can be configured by using extra request options. For instance, it's possible to disable sensitive content or question groups, add additional explanations, or change to third-person mode. Selecting a different interview mode can change which questions are asked during the interview.

Conditions

Each response contains a conditions attribute holding a list of possible conditions sorted by their estimated probability.

Each condition in the ranking is represented by a JSON object with the following attributes: id, name, common_name and probability.

While name and common_name attributes are returned for convenience, any additional information about a given condition can be retrieved from the /conditions/{id} endpoint using the id attribute. Alternatively, you can pass include_condition_details in extras to include condition details in the /diagnosis response.

The probability attribute is a floating point number between 0 and 1 indicating a match between reported evidence and conditions in the model.

Please note that the condition rankings may be empty [] if there is no evidence or in rare cases where the combination of evidence isn’t associated with any specific condition.

Ranking limiting

To prevent reverse-engineering of our models, we limit the number of conditions returned from /diagnosis.

Most notably, if the list of evidence is shorter than 3, only one condition will be returned. In the case of longer evidence lists, /diagnosis can return up to 20 conditions, depending on the probability distribution of the conditions in the rankings.

Patient Education

For some conditions, patient education articles may be available, as indicated by the has_patient_education attribute in condition_details being set to true. Patient education articles are documents compiled by our medical experts that contain detailed information about a condition’s causes, the usual diagnostic process, possible care methods, etc. They can be presented to a patient to give them a deeper understanding of the conditions returned in the interview process. For more information, please refer to the Patient Education section.

Stop recommendation

Once enough information has been collected, the interview should be stopped. To help you decide when to stop asking further questions, we’ve provided the stop condition recommendation. This feature uses a heuristic algorithm which takes into account the number of questions asked and the confidence of the current analysis' results.

⚠️

The stop recommendation is available only if you have indicated at least one piece of initial evidence (see Gathering initial evidence).

If should_stop is true, the stop condition has been reached; if it is false, the interview should be continued. If the attribute is absent altogether, either you haven’t specified any initial evidence or a stop recommendation could not be proposed.

⚠️

It is possible to finish the interview early if has_emergency_evidence is true, even while should_stop is still false. This is acceptable when the urgency of the case is sufficient for the end-user and quick treatment is needed. However, we recommend continuing the interview until should_stop is true to achieve the most accurate results regarding the list of most probable conditions and/or triage level. Please note that in some cases, even if has_emergency_evidence is already true, the triage level could still be raised from emergency to emergency_ambulance.
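One way to encode this decision, with the early emergency exit as an explicit opt-in (a hypothetical helper, not part of the API):

Python

```python
def interview_finished(response, allow_emergency_exit=False):
    """Decide whether to end the interview based on the response flags.
    Honoring should_stop is the recommended default; ending early on
    has_emergency_evidence is an application-level choice."""
    if response.get("should_stop"):
        return True
    if allow_emergency_exit and response.get("has_emergency_evidence"):
        return True
    return False
```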

Response extras

The extras attribute in the response object is empty by default, but can be used to return additional or experimental attributes for custom models or selected partners.

Extras

Both the request and response objects may contain an extras attribute. In requests, extras may include additional or experimental options to control the behavior of the inference engine. These options can also influence other endpoints that accept the same request object, such as /suggest or /triage.

ℹ️

Note that providing invalid or non-existent options will not produce an error.

enable_symptom_duration

This option introduces a new type of question and an additional evidence attribute to support specifying the duration of an observation. To enable it:

JSON
"extras": {
  "enable_symptom_duration": true
}

This option introduces the duration question type, which differs from other types by replacing the items attribute with evidence_id in the question object:

JSON
"question": {
  "type": "duration",
  "text": "How long have you had stomach pain?",
  "evidence_id": "s_13",
  "extras": {}
}

When responding to a duration type question, add a single object to the evidence list. This object must use the evidence_id value as its id and include a duration object alongside the usual choice_id. The duration object consists of two attributes:

  • value – a required numeric value that specifies the duration,
  • unit – a required duration unit, one of: week, day, hour, or minute.
JSON
"evidence": [
  {
    "id": "s_13",
    "choice_id": "present",
    "duration": {
      "value": 2,
      "unit": "day"
    }
  }
]

The enable_symptom_duration option also allows for the use of the duration object with initial evidence (observations marked with "source": "initial"). While this addition is optional, including duration information for initial symptoms can enhance the inference process and potentially shorten the duration of the interview.
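Building the answer to a duration question can be wrapped in a small helper (a sketch; the function name is our own):

Python

```python
def duration_evidence(evidence_id, value, unit, choice_id="present"):
    """Build the answer to a duration-type question: the question's
    evidence_id becomes the id, and a duration object is attached
    alongside the usual choice_id."""
    if unit not in ("week", "day", "hour", "minute"):
        raise ValueError("unit must be one of: week, day, hour, minute")
    return {"id": evidence_id,
            "choice_id": choice_id,
            "duration": {"value": value, "unit": unit}}
```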

interview_mode

This option allows you to control the behavior of the question selection algorithm. The interview mode may have an influence on the duration of the interview as well as the sequencing of questions.

Currently, the following interview modes are available:

  • default – suitable for symptom checking applications, providing the right balance between duration of interview and accuracy of the presented results,
  • triage – suitable for triage applications that prioritize a safe triage level over the highest possible accuracy of the final list of most probable conditions; if accuracy is a priority, this mode should not be used. The interview is also shorter when the patient is suspected of having a condition that requires emergency-level care: the interview is then stopped and the results are shown right away.
JSON
"extras": {
  "interview_mode": "triage"
}
⚠️

Interview modes can only be used when 5-level triage is enabled.

enable_explanations

This optional functionality is designed to provide additional descriptions, enhancing user understanding of each question's meaning.

JSON
"extras": {
  "enable_explanations": true
}

When enabled, explanations can be attached to questions or question items. For single type questions, explanations are added at the question level. In group questions, they are attached to individual question items. Two additional attributes are included at both levels:

  • explication – additional context or description of the observation in question,
  • instruction – a list of steps to confirm the symptom's presence.
JSON
"question": {
  "type": "single",
  "text": "Does she have a fever?",
  "explication": "The most accurate measurement is rectal temperature. You can also measure the temperature under the armpit or use contactless thermometers. However, using the contact thermometer is more accurate. Follow the thermometer manufacturer's instructions. A temperature above 37°C or 98.6°F is classified as a fever.",
  "instruction": [
    "Clean the thermometer using soap and cold water.",
    "For rectal insertion, apply a small amount of petroleum jelly to the end of the thermometer.",
    "Insert the silver tip into the anus or mouth.",
    "Hold the thermometer in place for 2 minutes, or as long as recommended by the manufacturer."
  ],
  "items": [
    {
      "id": "s_98",
      "name": "Fever",
      "explication": null,
      "instruction": null,
      "choices": [
        {
          "id": "present",
          "label": "Yes"
        },
        {
          "id": "absent",
          "label": "No"
        },
        {
          "id": "unknown",
          "label": "Don't know"
        }
      ]
    }
  ],
  "extras": {}
}
ℹ️

Explanations are optional and may not be present for every question or question item. In such cases, null is returned.

enable_third_person_questions

With this option, the interview can be configured to allow users to answer questions on behalf of someone else. By setting this option to true, each question from the /diagnosis endpoint will be formulated in the third person.

JSON
"extras": {
  "enable_third_person_questions": true
}
⚠️

If the model in use does not support third person questions, a 400 Bad Request error will be returned with the message "Third person question not supported."

include_condition_details

Enabling this option adds an additional condition_details object to each condition in the conditions list. This object contains attributes that have the same values and meanings as those in the output of the /conditions endpoint:

  • icd10_code
  • category
  • prevalence
  • acuteness
  • severity
  • triage_level
  • hint
  • has_patient_education
JSON
"extras": {
  "include_condition_details": true
}
JSON
"conditions": [
  {
    "id": "c_255",
    "name": "Tetanus",
    "common_name": "Tetanus",
    "probability": 0.3118,
    "condition_details": {
      "icd10_code": "A35",
      "category": {
        "id": "cc_16",
        "name": "Infectiology"
      },
      "prevalence": "very_rare",
      "severity": "severe",
      "acuteness": "acute",
      "triage_level": "emergency_ambulance",
      "hint": "You may need urgent medical attention! Call an ambulance.",
      "has_patient_education": false
    }
  }
]

Advanced options

The options listed below can be useful in certain use cases, but are generally not recommended as they may have a negative impact on interview quality.

disable_intimate_content

This option is used to exclude "intimate" concepts, including those related to sexual activity.

JSON
"extras": {
  "disable_intimate_content": true
}

There are several safeguards to ensure appropriate presentation of content tagged as "intimate". If disable_intimate_content is not included, intimate content may be returned in responses. That said, questions considered very sensitive in nature – especially those about sexual behavior – will still require the user's explicit consent. This consent is facilitated via the Consent to a sexual interview symptom (s_2652). Setting the disable_intimate_content option to true ensures that no content tagged as "intimate" is included.

Please note that the criteria for what constitutes "intimate" content can change over time. For the most up-to-date information regarding this classification, we recommend contacting Infermedica.

disable_groups

Activating this option configures the /diagnosis endpoint to return only single type questions, effectively disabling questions of the group_single and group_multiple types. This can be particularly useful in interfaces where implementing group questions is challenging, such as in chatbots or voice assistants. However, as a general guideline, it is recommended to keep group questions enabled for more accurate and shorter interviews.

JSON
"extras": {
  "disable_groups": true
}

Deprecated options

These options are maintained for backward compatibility only and should not be used in new implementations.

disable_adaptive_ranking

Enabling this option reverts to the legacy method of displaying the ranking of possible conditions, used before the implementation of the current adaptive ranking system.

JSON
"extras": {
  "disable_adaptive_ranking": true
}

enable_triage_3

Activating this option switches back to the legacy 3-level triage system. For detailed information, please refer to the /triage endpoint documentation.

JSON
"extras": {
  "enable_triage_3": true
}

Alternative use cases

While an interactive symptom checker (e.g. mobile application, chatbot or voice assistant) in which the user is presented with a series of medical questions is the most recognizable use case, there are other valid uses of the /diagnosis endpoint.

The /diagnosis endpoint can be used to provide context-aware decision support, e.g. when paired with /parse to analyze patient notes, or integrated into an EHR-like system to provide instant insights about possible causes or subsequent care steps. In such cases, only one call to /diagnosis is usually required, as all the evidence is known in advance and there is no direct contact with the patient.

When /diagnosis is used with custom models, there are even more possibilities. We've seen /diagnosis used to qualify patients for clinical trials, to assess the risks of post-operational complications, and to support operators of medical call centers.