
Florian Boudin, Lixin Shi, and Jian-Yun Nie

Abstract. Without a well-formulated and structured question, it can be very difficult and time-consuming for physicians to identify appropriate resources and search for the best available evidence for medical treatment in evidence-based medicine (EBM). In EBM, clinical studies and questions involve four aspects: Population/Problem (P), Intervention (I), Comparison (C) and Outcome (O), which are known as PICO elements. It is intuitively advantageous to use these elements in Information Retrieval (IR). In this paper, we first propose an approach to automatically identify the PICO elements in documents and queries.
We test several possible approaches to use the identified elements in IR.
Experiments show that it is a challenging task to accurately determine PICO elements. However, even with noisy tagging results, we can still take advantage of some PICO elements, namely the I and P elements, to enhance the retrieval process, and this allows us to obtain significantly better retrieval effectiveness than state-of-the-art methods.
1 Introduction

Physicians are educated to formulate their clinical questions according to several well-defined aspects in evidence-based medicine (EBM): Population/Problem (P), Intervention (I), Comparison (C) and Outcome (O), which are called PICO elements. The PICO structure is commonly used in clinical studies [7]. In many documents in the medical literature, one can find the PICO structure, which is, however, often implicit and not explicitly annotated. To identify documents corresponding to a patient's state, physicians also formulate their clinical questions in PICO structure. For example, in the question "In children with an acute febrile illness, what is the efficacy of single-medication therapy with acetaminophen or ibuprofen in reducing fever?" one can identify the following elements: P ⇒ "children with acute febrile illness", I ⇒ "single-medication therapy with acetaminophen", C ⇒ "ibuprofen" and O ⇒ "efficacy in reducing fever".
Using a well-formulated question according to the PICO structure facilitates searching for a precise answer within a large medical citation database [10].
However, using the PICO structure in Information Retrieval (IR) is not as straightforward as it seems. It first requires the identification of the PICO elements in the documents, as well as in the question if these elements are not explicitly separated in it. Several studies have been carried out on identifying PICO elements in medical documents and on using them in IR [5, 4]. However, these studies are limited in several aspects. First, many studies on the identification of PICO elements are limited to some segments of the medical documents (e.g. Method) [4], and in most cases the test collection is very small (a few hundred abstracts).
It is difficult to see whether one can easily identify PICO elements in all parts of medical documents in a large collection. Secondly, there have been very few tests on IR using PICO elements [5]. This is due to the lack of a standard test collection with questions in PICO structure. IR tests have been carried out on small test collections, and in many cases were not compared to traditional IR methods. It is not clear whether IR based on the PICO structure is more effective than traditional IR approaches.
In this paper, we propose an approach to perform IR using PICO elements.
The identification of these elements is cast as a classification task. A mixture of knowledge-based and statistical techniques is employed to extract discriminant features that, once combined in a classifier, allow us to identify clinically relevant elements in MEDLINE abstracts. Using these detected elements, we show that the information retrieval process can be improved. In particular, it turns out that the I and P elements should be enhanced in retrieval. The remainder of this paper is organized as follows. In the next section, we give an overview of the related work. Then, we present our classification approach to identify PICO elements in documents. Next, IR experiments using these elements are reported.
Finally, we draw some conclusions.
2 Related work

The first aspect of this study concerns the identification of PICO elements in medical documents. Several previous approaches have been proposed to categorize sentence types in medical abstracts using classification tools. [8] showed that Machine Learning can be applied to label structural information of sentences (i.e. Introduction, Method, Results and Conclusion). Thereafter, [5] presented a method that uses either manually crafted pattern-matching rules or a combination of basic classifiers to detect PICO elements in medical abstracts.
Prior to that, biomedical concepts are labelled by MetaMap [2] while relations between these concepts are extracted with SemRep [9], both tools being based on the Unified Medical Language System (UMLS). Using these methods, they obtained an accuracy of 80% for Population and Intervention, 86% for Problem and between 68% and 95% for Outcome. However, it is difficult to generalize this result, as the test was done on a very small dataset: 143 abstracts for Outcome and 100 abstracts for the other elements.
Recently, supervised classification was proposed by [6] to extract the number of trial participants. Results reported in this study show that the Support Vector Machine (SVM) algorithm achieves the best results with an f-measure of 86%. Again, it has to be noted that the testing data, which contains only 75 highly topic-related abstracts, is not representative of a real-world task. In a later study, [4] extended this work to I and O elements using Conditional Random Fields (CRF). To overcome data sparseness, PICO-structured abstracts were automatically gathered from MEDLINE to construct an annotated testing set (318 abstracts). This method showed promising results: f-measures of 83% for I and 84% for O. However, this study was carried out in a limited context: elements are only detected within the Method section, while several other sections such as Aim, Conclusion, etc. are discarded. It is not clear whether the identification of PICO elements in the whole document can reach the same level of performance. In this study, we do not restrict ourselves to particular sections, but try to identify elements in whole documents.
On the retrieval side, there have been only a few studies trying to use PICO elements in IR and comparing them to traditional methods. [5] is one of the few such studies. The method they describe consists in re-ranking an initial list of retrieved citations. To this end, the relevance of a document is scored using the detected PICO elements, in accordance with the principles of evidence-based medicine (i.e. quality of publication and task specificity are taken into consideration). Several other studies aimed to build a Question-Answering system for clinical questions [1]. But again, the focus has been on the post-retrieval step, while the document retrieval step only uses a standard IR approach. In this paper, we argue that IR has much to gain by using PICO elements.
Although retrieval effectiveness is reported in some studies using PICO elements, it remains to be proven that a PICO-based retrieval approach will always produce better effectiveness than traditional IR methods. In this study, we examine the effect of using PICO elements in the retrieval process in several ways and compare them to traditional IR models. In the next section, we start with the first step: identifying PICO elements in medical documents.
3 Identification of PICO elements in documents

PICO elements are often implicitly described in medical documents. It is important to identify them automatically. One could use linguistic patterns for this purpose.
However, a pattern-based approach may require a large amount of manual work, and its robustness has yet to be proved on large datasets. In this study, we will rather use a more robust statistical classification approach, which requires a minimal amount of manual preparation. There can be two levels of classification: one can identify each PICO element in the document, whether it is described by a word, a phrase or a complete sentence; one can also make a coarser-grain annotation, annotating a sentence as describing only one of the PICO elements.
The second method is a simplification: while the first classification is very difficult, the second one is easier to implement. Moreover, for the purpose of IR, a coarse-grain classification may be sufficient.
3.1 Construction of training and testing data

Even for a coarse-grain classification task, we still lack a standard test collection with annotated PICO elements. This increases the difficulty of developing and testing an automatic tool that tags these elements. This is also the reason why previous studies have focused on a small set of abstracts for testing.
We notice that many recent documents in PubMed (http://www.ncbi.nlm.nih.gov/pubmed/) do contain explicit headings such as "PATIENTS", "SAMPLE" or "OUTCOMES". The sentences under the "PATIENTS" and "SAMPLE" headings describe the P elements, and those under the "OUTCOMES" heading describe the O elements. Below is a segment of a document extracted from PubMed (pmid 19318702):

  PARTICIPANTS: 2426 nulliparous, non-diabetic women at term, with a singleton cephalic presenting fetus and in labour with a cervical dilatation of less than 6 cm. INTERVENTION: Consumption of a light diet or water during labour. MAIN OUTCOME MEASURES: The primary outcome measure was spontaneous vaginal delivery rate. Other outcomes measured included duration of labour.
We collect a set of roughly 260K abstracts from PubMed by specifying the following limits: published in the last 10 years, Humans, Clinical Trial, Randomized Controlled Trial, English. Then, structured abstracts containing distinctive sentence headings are selected and these sentences are marked with the corresponding PICO elements. We notice that the Intervention and Comparison elements belong to the same semantic group and are often described under the same heading. We therefore chose to group the corresponding segments into the same set. From the entire collection, three sets of segments have been extracted: Population/Problem (14,279 segments), Intervention/Comparison (9,095) and Outcome (2,394). Note that abstracts can also contain sentences under other headings, which we do not include in our extraction process. Therefore, it is possible that no Outcome is extracted from a document. This conservative extraction approach allows us to obtain a dataset with as little noise as possible.
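This heading-driven extraction can be sketched as follows; the heading-to-class mapping shown is a small hypothetical sample, not the paper's full heading list.

    import re

    # Hypothetical mapping from structured-abstract headings to PICO classes;
    # the actual heading lists used in the paper are larger.
    HEADING_TO_CLASS = {
        "POPULATION": "P", "PATIENTS": "P", "PARTICIPANTS": "P", "SAMPLE": "P",
        "INTERVENTION": "I", "INTERVENTIONS": "I", "COMPARISON": "I",
        "OUTCOME": "O", "OUTCOMES": "O", "MAIN OUTCOME MEASURES": "O",
    }

    def extract_labelled_segments(abstract):
        """Split a structured abstract on 'HEADING:' markers and keep only
        the segments whose heading maps to a PICO class (conservative
        extraction: segments under unknown headings are simply skipped)."""
        parts = re.split(r"([A-Z][A-Z &]+):", abstract)
        segments = []
        for heading, text in zip(parts[1::2], parts[2::2]):
            label = HEADING_TO_CLASS.get(heading.strip())
            if label:
                segments.append((label, text.strip()))
        return segments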
Prior to classification, each sentence undergoes pre-processing that replaces words by their canonical forms. Numbers spelled out in letters are converted to digits, and each word is checked against a series of manually crafted cue-word and cue-verb lists. The cue-words and cue-verbs are determined manually. Some examples are shown below:

  Cue-verbs: recruit (P), prescribe (I), assess (O)
  Cue-words: group (P), placebo (I), mortality (O)

On top of that, three semantic type lists, generated from the MeSH ontology (http://www.nlm.nih.gov/mesh/), are used to label terms in sentences. These lists are composed of entry terms corresponding to a selection of subgroups belonging to the semantic types "Living Beings", "Disorders" and "Chemicals & Drugs". The final set of features we use to classify sentences is: sentence position† (absolute, relative); sentence length†; number of punctuation marks†; number of numbers† (≤10, >10); word overlap with title†; number of cue-words⋆; number of cue-verbs⋆; MeSH semantic types⋆. Both statistical (marked with †) and knowledge-based (marked with ⋆) features are extracted. Using naive statistical features such as the number of punctuation marks is motivated by the fact that authors normally conceive their abstracts according to universally accepted rules that govern writing styles.
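The following sketch illustrates how such a feature vector could be computed for a single sentence; the tiny cue-word and cue-verb lists are placeholders for the manually crafted lists, and the MeSH lookups are omitted.

    # Illustrative feature extraction for one sentence; the cue lists below
    # are hypothetical samples, not the paper's full resources.
    CUE_WORDS = {"P": {"group", "patients"}, "I": {"placebo", "dose"},
                 "O": {"mortality", "rate"}}
    CUE_VERBS = {"P": {"recruit"}, "I": {"prescribe"}, "O": {"assess"}}

    def sentence_features(sentence, position, n_sentences, title):
        tokens = sentence.lower().split()
        numbers = [t for t in tokens if t.replace(".", "", 1).isdigit()]
        title_tokens = set(title.lower().split())
        feats = {
            # statistical features (marked with a dagger in the text)
            "abs_position": position,
            "rel_position": position / max(1, n_sentences - 1),
            "length": len(tokens),
            "punctuation": sum(sentence.count(c) for c in ",;:()"),
            "small_numbers": sum(1 for n in numbers if float(n) <= 10),
            "large_numbers": sum(1 for n in numbers if float(n) > 10),
            "title_overlap": len(title_tokens & set(tokens)),
        }
        # knowledge-based features (cue-words and cue-verbs per element)
        for elem in ("P", "I", "O"):
            feats["cue_words_" + elem] = sum(t in CUE_WORDS[elem] for t in tokens)
            feats["cue_verbs_" + elem] = sum(t in CUE_VERBS[elem] for t in tokens)
        return feats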
Tagging each document is a three-step process. First, the document is segmented into sentences. Then, each sentence is converted into a feature vector using the previously described feature set. Finally, each vector is submitted to multiple classifiers, one for each element, allowing us to label the corresponding sentence. We use several algorithms implemented in the Weka toolkit (http://www.cs.waikato.ac.nz/ml/index.html): J48 and Random Forest (decision trees), SVM (radial kernel of degree 3), multi-layer perceptron (MLP) and Naive Bayes (NB). For comparison, a position classifier (BL) was included as a baseline in our experiments. This baseline is motivated by the observation that PICO statements are typically found in specific sections of the abstract, which are usually ordered as Population/Problem, Intervention/Comparison and Outcome. Therefore, the relative position of a sentence can reasonably predict the PICO element to which it is related. A similar baseline has been used in previous studies [8].
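A minimal sketch of this pipeline is given below, with scikit-learn models standing in for the Weka implementations actually used (J48 corresponds roughly to a C4.5 decision tree); it assumes that one binary classifier per PICO element has already been trained on vectors produced by a vectorize function like the one above.

    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.naive_bayes import GaussianNB

    # Candidate learning algorithms, mirroring the five used in the paper.
    CANDIDATE_ALGORITHMS = {
        "J48": DecisionTreeClassifier(),
        "RF": RandomForestClassifier(),
        "SVM": SVC(kernel="rbf", probability=True),  # radial kernel
        "MLP": MLPClassifier(max_iter=500),
        "NB": GaussianNB(),
    }

    def tag_document(sentences, vectorize, models):
        """(1) sentences are already segmented; (2) each one is turned into a
        numeric feature vector; (3) each vector is scored by one trained
        binary classifier per PICO element, and the best-scoring element
        becomes the sentence label. `models` maps element name -> classifier."""
        tags = []
        for i, sent in enumerate(sentences):
            x = [vectorize(sent, i, len(sentences))]
            scores = {elem: clf.predict_proba(x)[0][1]  # P(element | sentence)
                      for elem, clf in models.items()}
            tags.append(max(scores, key=scores.get))
        return tags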
For each experiment, we report the precision, recall and f-measure of each PICO classifier. To paint a realistic picture, 10-fold cross-validation is used for each algorithm. Moreover, all sentence headings were removed from the data sets, converting all abstracts into unstructured ones. This allows us to simulate a real-world scenario by avoiding biased values for features relying on cue-word lists. The output of our classifiers is judged correct if the predicted sentence corresponds to the labelled one. The performance of the five classification algorithms on each data set is shown in Table 1. No single classifier always outperforms the others, but the multi-layer perceptron (MLP) achieves the best f-measure scores and SVM the best precision scores. We performed further experiments on SVM with different kernels and settings. The best scores are obtained with a radial kernel of degree 3, other kernels giving lower scores or similar performance at higher computational cost.
As the classifiers perform differently on each PICO element, in a second series of experiments we use three strategies to combine their predictions. The first method (F1) uses voting: sentences that have been labelled by the majority of classifiers are considered candidates. In case of ambiguity (i.e. multiple sentences with the same number of votes), the average of the prediction scores is used to make a decision. The second and third methods compute a linear combination of the predicted values, in an equiprobable scheme (F2) or using weights empirically fixed according to the observed f-measure ranking (F3) (i.e. for the P element: 5 for MLP, 4 for RF, 3 for J48, 2 for SVM and 1 for NB). Results are also shown in Table 1. Combining multiple classifiers using F3 achieves the best results, with f-measure scores of 86.3% for P, 67% for I and 56.6% for O.
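One possible implementation of the F3 fusion, under our reading of the description above (per-classifier prediction scores combined linearly with rank-based weights), is sketched below; the weights shown are the ones quoted for the P element.

    F3_WEIGHTS = {"MLP": 5, "RF": 4, "J48": 3, "SVM": 2, "NB": 1}

    def fuse_f3(scores_by_classifier):
        """scores_by_classifier maps a classifier name to a list with one
        prediction score per sentence; returns the index of the sentence
        with the highest weighted combined score."""
        n = len(next(iter(scores_by_classifier.values())))
        combined = [sum(F3_WEIGHTS[name] * scores[i]
                        for name, scores in scores_by_classifier.items())
                    for i in range(n)]
        return max(range(n), key=lambda i: combined[i])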
Table 1. Performance of each classifier and of the fusion strategies, in terms of precision (P), recall (R) and f-measure (F).
One can see that the O and I elements are more difficult to identify than P elements. The reason is not only the smaller amount of training data but mainly the complexity of the task. Indeed, I elements are often mis-detected because of the high number of possible candidates: terms belonging to the semantic groups usually assigned to I (e.g. drug names) are scattered throughout the abstract. Another reason is the use of specific terms occurring in multiple PICO elements. For example, although treatments are highly related to Intervention, they can also occur in other elements. In the following IR experiments, we will use the F3 tagging strategy (best results).
4 Using PICO elements in information retrieval

The language modeling approach to Information Retrieval models the idea that a document is a good match to a query if the document model is likely to generate the query. Most language-modeling work in IR uses unigram language models, also called bag-of-words models, assuming that there is no structure in queries or documents. A typical way to score a document D as relevant to a query Q is to use the Kullback-Leibler divergence between their respective language models:

  score(Q, D) = Σ_{t∈Q} p(t | MQ) · log p(t | MD)    (1)

where p(t | MQ) and p(t | MD) are (unigram) language models of the query and document respectively. Usually, the query model is simply estimated by Maximum Likelihood Estimation over the query words, while the document model is smoothed (e.g. using Dirichlet smoothing) to avoid the zero-probability problem.
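As an illustration, a minimal sketch of the scoring of Equation (1) with a Dirichlet-smoothed document model might look as follows; the data structures (term-frequency dictionaries and a collection-probability table) are our own assumptions, not the paper's implementation.

    import math

    def dirichlet_prob(term, doc_tf, doc_len, coll_prob, mu=2000.0):
        """Dirichlet-smoothed document model p(t | MD); coll_prob is assumed
        to hold the collection probability of every query term."""
        return (doc_tf.get(term, 0) + mu * coll_prob[term]) / (doc_len + mu)

    def kl_score(query_tf, doc_tf, doc_len, coll_prob, mu=2000.0):
        """Rank-equivalent form of Equation (1); the query model is
        estimated by maximum likelihood over the query words."""
        q_len = sum(query_tf.values())
        return sum((qf / q_len) *
                   math.log(dirichlet_prob(t, doc_tf, doc_len, coll_prob, mu))
                   for t, qf in query_tf.items())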
We propose several approaches that extend the basic LM approach to take into consideration the PICO element annotation. According to the PICO tagging, the content of queries and documents is divided into the following four fields: Population/Problem (P), Intervention/Comparison (I), Outcome (O), and Others (X). Let us use the following notation: QAll = QP + QI + QO + QX for the query Q and DAll = DP + DI + DO + DX for the document D. In case of missing tagging information, the basic bag-of-words model is used.
4.1.1 Using PICO tags in queries

We try to assign an importance weight to each of the PICO elements. Intuitively, the more important a field is, the higher its weight should be. We propose the following two models by adjusting the MQ weighting:

Model-1T: adjusting weights at the PICO element (term) level. The query model is redefined as follows:

  p1(t | MQ) = γ · (1 + Σ_{E∈{P,I,O}} wQ,E · δ(QE, t)) · p(t | MQ)    (2)

where wQ,E is the weight of query field E; δ(QE, t) = 1 if t ∈ QE, 0 otherwise; γ is a normalization factor. The score function of this model, namely score1T, is obtained by replacing p(t | MQ) by p1(t | MQ) in Equation (1).
Model-1F: adjusting weights at the PICO field level. Four basic models, for DAll, DP, DI and DO, are created. The final score is their weighted linear interpolation with wQ,E:

  score1F(Q, D) = Σ_{E∈{All,P,I,O}} wQ,E · score(QE, DE)    (3)

4.1.2 Using PICO tags in documents

We assume each field in the tagged document has a different importance weight wD,E. The document model is redefined as follows:

  p2(t | MD) = γ · (p(t | MD) + Σ_{E∈{P,I,O}} wD,E · p(t | DE))    (4)

where γ is a normalization factor, and p(t | DE) uses the Dirichlet smoothing function. We denote this model by Model-2, and score2 is obtained by replacing p(t | MD) by p2(t | MD) in Equation (1).
4.1.3 Using PICO tags in both queries and documents

Model-3T: enhancement at the term level. The query model is redefined as in Model-1T and the document model is redefined as in Model-2.
Model-3F: enhancement at the field level. This is the combination of Model-2 and Model-1F.
In all our models, there are a total of 6 weighting parameters: 3 for queries (wQ,P, wQ,I, wQ,O) and 3 for documents (wD,P, wD,I, wD,O).
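To make the weighting concrete, here is a minimal sketch of the Model-1T query model, assuming the reconstructed form of Equation (2) above; the function name and data layout are illustrative only.

    def boosted_query_model(query_tf, fields, weights):
        """Sketch of p1(t | MQ): the MLE probability of each term is boosted
        by the weight of every PICO field it occurs in, then renormalized
        (the gamma factor of Equation (2)).
        fields: {"P": set of terms, ...}; weights: {"P": wQ_P, ...}."""
        q_len = sum(query_tf.values())
        raw = {t: (qf / q_len) *
                  (1 + sum(w for e, w in weights.items()
                           if t in fields.get(e, ())))
               for t, qf in query_tf.items()}
        gamma = 1.0 / sum(raw.values())
        return {t: gamma * p for t, p in raw.items()}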
4.2 Identification of PICO elements in queries

PICO elements may be manually marked in queries by the user. This is, however, not a realistic scenario. More likely, queries will be formulated as free sentences or phrases. Identifying PICO elements in a query is different from what we did on documents, because we need to classify smaller units. In this paper, we adopt a language model classification method [3], which is an extension of Naïve Bayes.
The principle is straightforward. Let P, I and O be the classes. The score of a class ci for a given term t is estimated by p(t | ci) · p(ci). The probability p(ci) can be estimated by the percentage of training examples belonging to class ci, and p(t | ci) by maximum likelihood with Jelinek-Mercer smoothing:

  pJM(t | ci) = (1 − λ) · p(t | ci) + λ · p(t | C)

where C is the whole collection and λ is the smoothing parameter.
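A minimal sketch of this term classifier, assuming pre-computed class language models, a collection model and class priors stored as plain dictionaries:

    def classify_term(term, class_models, coll_model, priors, lam=0.5):
        """Label a query term with the class maximizing pJM(t | ci) * p(ci);
        class_models, coll_model and priors are assumed pre-computed from
        the sentences tagged by the approach of Section 3."""
        def score(c):
            p_jm = ((1 - lam) * class_models[c].get(term, 0.0)
                    + lam * coll_model.get(term, 0.0))
            return p_jm * priors[c]
        return max(("P", "I", "O"), key=score)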
The above approach requires a set of classified data in order to construct the LM of each class. To this end, we use the sentences classified by the previously described approach (see Section 3). Usually, users prefer to select important terms as their queries. As a consequence, queries should contain proportionally more PICO elements than documents. Therefore, we assume that each query term belongs to one of the P, I or O classes. The performance of the classification method is computed over a set of 52 queries (corpus described in Section 5) by comparison to a manual tagging, for different values of the parameter λ.
The best results are obtained with λ set to 0.5, with f-measures of 77.8% for P, 68.3% for I and 50% for O.
5 IR experiments

We gathered a collection of 151,646 abstracts from PubMed by searching for the keyword "diabetes" and specifying the following limits: Humans and English language. The average length of the documents is 276 words. The tagging time of our fusion strategy (see Section 3) was approximately one hour on a standard desktop computer. For queries, we use the Cochrane systematic reviews on 10 clinical questions about "diabetes". All the references in the "Included" studies are judged to be relevant for the question. These included studies are selected by the reviewer(s) (the author(s) of the review article) and judged to be related to the clinical question. As these studies are published prior to the review article, we only try to retrieve documents published before the review's publication date. From the 10 selected questions, medical professionals (professors in family medicine) formulated a set of 52 queries. Each query has been manually annotated according to the following elements, which extend the PICO structure: Population (P), Problem (Pr), Intervention (I), Comparison (C), Outcome (O) and Duration (D). However, in our experiments, we use a simplified tagging: P and Pr are grouped together (as P), while C and D are discarded. Below are some alternative formulations of queries for the question "Pioglitazone for type 2 diabetes mellitus":

  In patients(P) | with type 2 diabetes(Pr) | does pioglitazone(I) | compared to placebo(C) | reduce stroke and myocardial infarction(O) | 2 year period(D)

  In patients(P) | with type 2 diabetes who have a high risk of macrovascular events(Pr) | does pioglitazone(I) | compared to placebo(C) | reduce mortality(O)

The resulting test corpus is composed of 52 queries (average length of 14.7 words) and 378 relevant documents. In our experiments, we try to answer the following questions: does the identification of PICO elements in documents and/or in queries help in IR? And if so, how should these elements be used in the retrieval process?

We first tested a naïve approach that matches the tagged elements in the query against the corresponding elements in the documents, i.e. each PICO tag defines a field, and terms are only allowed to match within the same field. However, this approach quickly turned out to be too restrictive, a restriction amplified by the low accuracy of the PICO tagging. Therefore, we do not use this method as a baseline, but the two following instead:

Boolean model: This is the search mode widely used in the medical domain. Usually, a user constructs a Boolean query iteratively by adding and modifying terms. We simulate this process by creating a conjunction of all the query words. Queries created in this way may be longer than what a physician would construct. Boolean retrieval resulted in a MAP of 0.0887 and a P@10 of 0.1885.
Language model: This is one of the state-of-the-art approaches in current IR research. In this method, both the document and the query are considered as bags of words, and no PICO structure is used. The LM approach resulted in a MAP of 0.1163 and a P@10 of 0.25. This is the baseline we compare to.
In this first series of experiments, we consider the detected PICO elements in documents while the queries are treated as bags of words. During the retrieval process, each element E ∈ {P, I, O} is boosted by a corresponding weight wD,E. We begin by setting the weights to 0.1 to see the impact of boosting each element alone. Table 2 shows that when these elements are enhanced, no noticeable improvement is obtained. We then tried different combinations of weighting parameters from 0 to 0.9 in steps of 0.1. The best improvement remains very small (wD,P = 0.5, wD,I = 0.2, wD,O = 0) and in most cases we obtain worse results.
Table 2. MAP scores for Model-2 without query tagging (⋆: wD,P = 0.5, wD,I = 0.2).
The above results show that it is not useful to consider PICO elements only in documents while using the query as a bag of words. There may be several reasons for this. First, the accuracy of the automatic document tagging may be insufficient. Second, even if elements are correctly identified in documents, when queries are treated as bags of words any PICO element can match any identical word in the query, whether it describes the same element or not. In this sense, identifying elements only in documents is not very useful.
Now, we consider PICO tagging in both queries and documents. For simplicity, the same weight is used for queries and documents. In this series of tests, we use manual tagging for the queries and automatic tagging for the documents. Table 3 shows the best figures obtained with this method. We can see that by properly setting the parameters, retrieval effectiveness can be significantly improved, in particular when I elements are given a relatively high weight, P elements a medium one, and O elements no enhancement. This seems to indicate that the I element is the most important in medical search (at least for the queries we considered). This is consistent with previous studies on IR using PICO elements. In fact, [11] suggested using I and P elements first to construct Boolean queries, and considering the other elements only if too many results are returned.
  Model      MAP                P@10
  Model-1T   0.1442 (+24.0%‡)   0.3173 (+26.9%‡)
  Model-3T   0.1452 (+24.8%‡)   0.3404 (+36.1%‡)
  Model-1F   0.1514 (+30.2%‡)   0.3538 (+42.7%‡)
  Model-3F   0.1522 (+30.9%‡)   0.3577 (+23.0%‡)

Table 3. Performance measures for Model-1T and Model-3T (w·,P = 0.9, w·,O = 0), and for Model-1F and Model-3F (w·,P = 0.1, w·,I = 0.3, w·,O = 0). Increase percentages over the LM baseline are given in parentheses (‡: t-test < 0.01).
The question now is: can we determine reasonable weights automatically? We use cross-validation in this series of experiments to test this. We divided the 52 tagged queries into two groups: Q26A and Q26B. A grid search (from 0 to 1 in steps of 0.1) is used to find the best parameters on Q26A, which are then tested on Q26B, and vice versa. Results are shown in Table 4. The best parameters found for Q26A with Model-1T are wQ,P = 0.6, wQ,I = 0.9, wQ,O = 0 (MAP = 0.1688, P@10 = 0.2269), and for Q26B wQ,P = 0, wQ,I = 0.9, wQ,O = 0 (MAP = 0.1301, P@10 = 0.4192). Similarly for Model-1F, the best parameters for Q26A are wQ,P = 0.2, wQ,I = 0.3, wQ,O = 0 (MAP = 0.1784, P@10 = 0.2308), and for Q26B wQ,P = 0, wQ,I = 0.3, wQ,O = 0 (MAP = 0.1350, P@10 = 0.4808).
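The grid search over the three query weights can be sketched as follows, with a hypothetical evaluate function that runs retrieval with the given weights on the training fold and returns its MAP:

    from itertools import product

    def grid_search(evaluate):
        """Exhaustive search of (wP, wI, wO) from 0 to 1 in steps of 0.1 on
        a training fold; `evaluate` is a hypothetical callback that runs
        retrieval with the given weights and returns the MAP score."""
        grid = [i / 10 for i in range(11)]
        return max(product(grid, repeat=3), key=lambda w: evaluate(*w))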
The experiments in Table 4 show that, by cross-validation, we can determine parameters that lead to a retrieval accuracy very close to that of the optimal settings.
Table 4. Performance measures in cross-validation (train→test) for Model-1T and Model-1F; queries are manually annotated.
The previous results show that query tagging leads to better IR accuracy. The question is whether this task, if performed automatically, still leads to improvements.
Compared to manual annotation, automatic query tagging works well even with its low tagging accuracy (Table 5). One explanation may be that manual tagging is not always optimal for retrieval. For example, the query "In patients with type 2 diabetes(P); pioglitazone(I); reduce cardiovascular events adverse events mortality improve health related quality life(O)" is automatically tagged as "patients type 2 diabetes cardiovascular health(P); pioglitazone reduce(I); events adverse events mortality improve related quality life(O)". The average precision for this query improves from 0.245 to 0.298. Intuitively, tagging cardiovascular as P seems better than O, even if it is not necessarily more correct. However, one also has to consider how the tag is used: by marking cardiovascular as P, this concept is enhanced more, which in this case turns out to be beneficial.
Table 5. Performance measures for Model-1F (wQ,P = 0.1, wQ,I = 0.3).

6 Conclusion

PICO is a well-defined structure widely used in many medical documents, which can also be used to formulate clinical questions. However, few systems have been developed to allow physicians to use the PICO structure effectively in their search.
In this paper, we have investigated the utilization of PICO elements in medical IR. We first tried to identify these elements in documents and queries; then a series of models was tested to compare different ways of using them.
Our experiments on the identification of PICO elements showed that the task is very challenging, and our classification accuracy is relatively low. This may lead one to think that the identification result is not usable. However, our IR experiments showed that significant improvements can be achieved using PICO elements, despite this relatively low accuracy. This shows that we do not need a perfect identification of PICO elements before using them: IR can tolerate a noisy identification result. The key problem is the correct utilization of the tagging results. In our experiments, we found that enhancing some PICO elements in queries (and in documents) leads to better retrieval results.
This is especially true for the I and P elements.
References

1. A. Andrenucci. Automated Question-Answering Techniques and the Medical Domain. In HEALTHINF, pages 207–212, 2008.
2. A.R. Aronson. Effective Mapping of Biomedical Text to the UMLS Metathesaurus: The MetaMap Program. In AMIA Symposium, 2001.
3. J. Bai, J.Y. Nie, and F. Paradis. Using language models for text classification. In Asia Information Retrieval Symposium (AIRS), Beijing, China, 2004.
4. G. Chung. Sentence retrieval for abstracts of randomized controlled trials. BMC Medical Informatics and Decision Making, 9(1):10, 2009.
5. D. Demner-Fushman and J. Lin. Answering clinical questions with knowledge- based and statistical techniques. Computational Linguistics, 33(1):63–103, 2007.
6. M.J. Hansen, N.O. Rasmussen, and G. Chung. A method of extracting the number of trial participants from abstracts describing randomized controlled trials. Journal of Telemedicine and Telecare, 14(7):354–358, 2008.
7. W.R. Hersh. Information retrieval: a health and biomedical perspective. Springer.
8. L. McKnight and P. Srinivasan. Categorization of sentence types in medical abstracts. In AMIA Annual Symposium, 2003.
9. T.C. Rindflesch and M. Fiszman. The interaction of domain knowledge and linguistic structure in natural language processing: interpreting hypernymic propositions in biomedical text. Journal of Biomedical Informatics, 36(6):462–477, 2003.
10. C. Schardt, M. Adams, T. Owens, S. Keitz, and P. Fontelo. Utilization of the PICO framework to improve searching PubMed for clinical questions. BMC Medical Informatics and Decision Making, 7(1):16, 2007.
11. How to answer your clinical questions more efficiently. Family Practice Management, 12(7):37, 2005.
