Reasoner Evaluation
This page describes the evaluation we performed to understand the impact of applying different reasoners.
Drawing on the two reviews cited below, we selected reasoners that met the following criteria:
- Low response time
- Available via the OWLAPI
- Open source
- Khamparia A, Pandey B. Comprehensive analysis of semantic web reasoners and tools: a survey. Education and Information Technologies. 2017 Nov 1;22(6):3121-45.
- Parsia B, Matentzoglu N, Gonçalves RS, Glimm B, Steigmiller A. The OWL reasoner evaluation (ORE) 2015 competition report. Journal of Automated Reasoning. 2017 Dec 1;59(4):455-82.
| Reasoner | Language | OWLTools |
|---|---|---|
| ELK | EL | Yes |
| ELepHant | EL | No |
| Pellet | DL | Yes |
| RACER | DL | No |
| FaCT++ | DL | No |
| Chainsaw | DL | No |
| Konclude | DL | No |
| Crack | DL | No |
| TrOWL | DL+EL | No |
| MORe | DL+EL | No |
- Benchmark each of the algorithms on HPO+Imports, recording (see the OWLAPI sketch after this list):
  - Run-time
  - Justifications
  - Count of inferred axioms
  - Consistency
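As a rough illustration of the benchmarking step, here is a minimal sketch, assuming OWLAPI 4 and the ELK OWLAPI bindings; the file name `hp-imports.owl` and the class name are placeholders, and the same calls would be repeated with each reasoner factory from the table above. Computing justifications needs an additional explanation library and is omitted here.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

import org.semanticweb.elk.owlapi.ElkReasonerFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.OWLAxiom;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.reasoner.InferenceType;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import org.semanticweb.owlapi.reasoner.OWLReasonerFactory;
import org.semanticweb.owlapi.util.InferredAxiomGenerator;
import org.semanticweb.owlapi.util.InferredOntologyGenerator;
import org.semanticweb.owlapi.util.InferredSubClassAxiomGenerator;

public class ReasonerBenchmark {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        // "hp-imports.owl" is a placeholder for the merged HPO+Imports file
        OWLOntology ontology = manager.loadOntologyFromOntologyDocument(new File("hp-imports.owl"));

        // Swap in the factory for whichever reasoner is being benchmarked
        OWLReasonerFactory factory = new ElkReasonerFactory();

        // Run-time: time classification of the ontology
        long start = System.nanoTime();
        OWLReasoner reasoner = factory.createReasoner(ontology);
        reasoner.precomputeInferences(InferenceType.CLASS_HIERARCHY);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // Consistency check
        boolean consistent = reasoner.isConsistent();

        // Count of inferred axioms: materialize inferred subclass axioms
        // into a fresh ontology and count them
        List<InferredAxiomGenerator<? extends OWLAxiom>> gens = new ArrayList<>();
        gens.add(new InferredSubClassAxiomGenerator());
        OWLOntology inferred = manager.createOntology();
        new InferredOntologyGenerator(reasoner, gens).fillOntology(manager.getOWLDataFactory(), inferred);

        System.out.printf("run-time: %d ms, consistent: %b, inferred axioms: %d%n",
                elapsedMs, consistent, inferred.getAxiomCount());
        reasoner.dispose();
    }
}
```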
- For all algorithms that pass the benchmark, run them against PheKnowLator, both (see the sketch after this list):
  - Including disjointness axioms
  - Excluding disjointness axioms
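One way to produce the disjointness-free variant is to strip the disjointness axioms with the OWLAPI before reasoning. A minimal sketch, again assuming OWLAPI 4; the file names are placeholders, not the actual PheKnowLator build artifacts:

```java
import java.io.File;

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.AxiomType;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;

public class StripDisjointness {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        // Placeholder file name for the PheKnowLator ontology
        OWLOntology ontology = manager.loadOntologyFromOntologyDocument(new File("PheKnowLator_full.owl"));

        // Drop class-level disjointness; other disjointness axiom types
        // (e.g. AxiomType.DISJOINT_OBJECT_PROPERTIES) can be removed the same way
        manager.removeAxioms(ontology, ontology.getAxioms(AxiomType.DISJOINT_CLASSES));

        manager.saveOntology(ontology, IRI.create(new File("PheKnowLator_no_disjoint.owl").toURI()));
    }
}
```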
- Clinician evaluation by Jordan Wyrwa, DO:
  - Create a spreadsheet of the inferred axioms by algorithm (see the export sketch after this list) and mark each axiom as:
    - Correct or Incorrect
    - Definitely clinically relevant, Maybe clinically relevant, or Not clinically relevant
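Such a spreadsheet could be generated directly from the materialized inferences. A minimal TSV-export sketch, assuming OWLAPI 4 plus the ELK bindings, with hypothetical file names and blank columns for the manual ratings:

```java
import java.io.File;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.List;

import org.semanticweb.elk.owlapi.ElkReasonerFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.OWLAxiom;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import org.semanticweb.owlapi.util.InferredAxiomGenerator;
import org.semanticweb.owlapi.util.InferredOntologyGenerator;
import org.semanticweb.owlapi.util.InferredSubClassAxiomGenerator;

public class ExportInferredAxioms {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        // Placeholder file name for the PheKnowLator ontology
        OWLOntology ontology = manager.loadOntologyFromOntologyDocument(new File("PheKnowLator_full.owl"));

        // Materialize inferred subclass axioms with one reasoner (ELK shown here)
        OWLReasoner reasoner = new ElkReasonerFactory().createReasoner(ontology);
        List<InferredAxiomGenerator<? extends OWLAxiom>> gens = new ArrayList<>();
        gens.add(new InferredSubClassAxiomGenerator());
        OWLOntology inferred = manager.createOntology();
        new InferredOntologyGenerator(reasoner, gens).fillOntology(manager.getOWLDataFactory(), inferred);

        // One row per inferred axiom; the last two columns are left blank
        // for the clinician's correctness and clinical-relevance ratings
        try (PrintWriter out = new PrintWriter("inferred_axioms_ELK.tsv")) {
            out.println("reasoner\taxiom\tcorrectness\tclinical_relevance");
            for (OWLAxiom axiom : inferred.getAxioms()) {
                out.printf("ELK\t%s\t\t%n", axiom);
            }
        }
        reasoner.dispose();
    }
}
```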