Prostate cancer diagnosis is performed by pathologists through the microscopic analysis of tissue samples from the prostate gland. The development of automatic acquisition and digitization technologies has enabled the construction of large collections of digitized histopathology slides, which are usually accompanied by clinical information and other types of metadata. These collections of cases, along with their metadata, have the potential to be an invaluable resource for the analysis of new, challenging cases, supporting diagnosis, prognosis, and theragnosis decisions. This paper presents a multimodal retrieval system, based on a supervised multimodal kernel semantic embedding model, that supports the search for relevant cases in a multimodal database combining images, i.e., histopathology slides, and text, i.e., pathology reports. The system was tested on a multimodal prostate adenocarcinoma dataset composed of whole slide images of tissue samples, pathology reports, and grading information based on the Gleason score. The system achieves strong multimodal retrieval performance, with a Mean Average Precision (MAP) of 0.6263.
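For context, the Mean Average Precision reported above is the standard retrieval metric: the average precision of each query's ranked result list, averaged over all queries. A minimal sketch of how it is computed (the function names are illustrative and not from the paper; relevance is assumed binary):

```python
def average_precision(rels):
    """AP for one query: rels is a binary relevance list over the ranked results.

    Precision is accumulated at each rank where a relevant case appears,
    then normalized by the number of relevant cases retrieved.
    """
    hits, score = 0, 0.0
    for rank, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            score += hits / rank
    return score / hits if hits else 0.0


def mean_average_precision(all_rels):
    """MAP: mean of per-query average precision over all queries."""
    return sum(average_precision(r) for r in all_rels) / len(all_rels)
```

For example, a query whose ranked results are relevant, non-relevant, relevant yields AP = (1/1 + 2/3) / 2 ≈ 0.833; averaging such per-query scores over the whole query set gives the MAP figure.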