3D workstation face-off puts seasoned users in the hot seat

September 1, 2006

High noon in the auditorium finds imagers grappling with challenging clinical cases as the clock ticks during fourth annual event

Spotlights shone on a handful of men sitting in a row on a stage. The host was amiable but challenging. Under the glare, the contestants seemed nervous. Some cracked personal jokes about ex-girlfriends in a stab at comic relief. Close-ups of anatomical parts beamed on the big screen overhead.

It could have been some TV dating game show. But the troupe of sophisticated imaging workstations and larger-than-life scenes of severe calcification gave away the dramatic event's true purpose. This was the fourth annual 3D workstation face-off, held at the Stanford Multidetector-Row CT symposium in San Francisco in June.

The experienced end-users competing in the face-off confronted difficult cases under tight time constraints, in the process showing how various workstations perform in an intense environment. One by one, participants paraded the capabilities of products from six vendors: Barco, GE, Philips, Siemens, TeraRecon, and Vital Images.

The 2006 event ran to nearly four hours, yet many in the darkened auditorium stayed until the bitter end of a very long day. Afterward, some attendees called the experience fun and entertaining. The "fun," of course, lay in what the radiologists on the stage half-jokingly referred to as their "suffering" in the face of clinical obstacles.

'A LITTLE DANCE'

Participants had been given four difficult cases in advance of the event to aid them in preparing for the competition. They had between five and seven minutes to complete tasks, including a six-minute assessment of metastatic breast cancer with prior chest CT and PET/CT studies.

"It was like a little dance-if you miss a step, you are out of time," said Dr. Patrick Barr, the medical director at Dallas-based Southwest Diagnostic Center who piloted a Philips workstation on a nuclear medicine case. "It was a bit nerve-wracking, even though we couldn't see anyone in the audience. We had a short period of time to review a case. You really have to fly through it, and it had to be a well-choreographed process."

In addition to the breast cancer demo, three other cases were performed on stage:

- foot fracture (five minutes),

- bilateral calf claudication (seven minutes), and

- cardiac CT (seven minutes).

Among other detailed tasks, participants were required to provide measurements for stenosis quantification, lung nodule standard uptake value, and cardiac function. These figures were recorded by the moderator, Dr. Geoffrey Rubin, chief of cardiovascular imaging at Stanford. At the end of the event, measurements were compared to see if there were any matches between the findings of different users and workstations.

The cases were more challenging than those seen in a normal case load. In some situations, such as the cardiac CT, the image data were not of sufficient quality, said Dr. Lawrence Tanenbaum, section chief of MRI, CT, and neuroradiology at Edison Imaging Associates in New Jersey.

"We had to work around shortcomings in the data. In my routine work, 95 out of 100 cases are easy to process," said Tanenbaum, who demonstrated the GE workstation. "The face-off was one of the most stressful things I have ever done in my life. But I would sign up to do it again if they asked me to."

Another contestant, Dr. Brian Lucey, an assistant professor of radiology at Boston University School of Medicine, said the cases were indeed challenging, but clinical practice can also be rigorous.

"In the real world, not every study is done on a 20-year-old healthy patient. You deal with the data you are given. It's more of a challenge," he said.

BEYOND HYPE

The face-off helps users see beyond vendors' marketing hype and witness workstations in action, said Rubin, who is also co-director of the CT symposium. Sophisticated observers tend to get the most out of the contest, picking up on the strengths and limitations of the software. Beyond its educational value, the face-off event serves other purposes.

"The face-off has many faces. End-users are just one constituency for which it is designed. We aim to push industry forward," he said.

Rubin said engineers and developers who work for the vendors pay close attention to proceedings. The events have an impact on future workstation design and development.

"We enjoy participating. The face-off is stressful, but it gives vendors great visibility in the marketplace. A lot of energy goes toward preparing physicians and software," said Todd Deckard, product manager at Vital Images. "It's an intense opportunity to see how our own product performs and to gain competitive intelligence about how other products perform. Our feet are held to the flame."

This year's face-off revealed that gaps in workstation quality and performance have narrowed, said Dr. Scott Lipson, who demonstrated the Vital Images workstation.

"A few years ago, there were substantial differences in strengths and weaknesses between the workstations. I think that the glaring deficiencies we saw a few years ago are more or less gone," said Lipson, a staff radiologist at Alta Bates Medical Center in Berkeley.

Some differences in functionality remain, however. For example, one workstation could not do automatic bone subtraction, a task required in one of the case studies.

The face-off also revealed large differences in the way workstations go about automatic vessel segmentation and analysis. All the products could do it, but they had different degrees of success, Lipson said.

MISMATCHES AT FINALE

At the end of the face-off, Rubin unveiled a chart showing comparisons of measurements provided by each workstation. Striking differences appeared in some of the measurements, such as assessment of lung nodules.

"Most of the workstations severely underestimated the lung nodule. The automatic tools tended not to pick up the whole nodule," Lipson said.

Although the case implied that the patient was responding to treatment, some of the workstations indicated that the tumor was getting bigger, while others showed it was shrinking. During the face-off, Lipson said, he had to manually redraw the contours on top of the automatic measurement to get what he considered an accurate result.

Reference values were not provided for comparison. So while observers learned that there were big differences in measurements of workstations, it was not possible to tell which measurements were on target.

"There was such a big range. Some workstations were very close, and others were very far off. There may be problems with automatic tools. What good are automatic tools to measure and segment if they don't generate accurate information? We were not able to assess that, because we did not know how far we were from reality," Lipson said.

Differences in measurements, however, could reflect mistakes made by users, who were working under time pressure.

"Automatic tools don't work all the time for anybody. The variability of values underscores the fact that any time you use an automatic tool to generate a measurement, you have to reality-check it. You can't necessarily accept information a tech provides you," he said.

WORKSTATION ASSESSMENT

When assessing workstations, users typically consider power, functionality, and ease of use. Although the experienced users on the face-off stage made some of the tasks look easy, it may be more difficult for less experienced radiologists to achieve the same results in practice, Lipson said.

"Ease of use is almost as important as how powerful a workstation is," he said.

If radiologists use a workstation occasionally, ease of use becomes even more important, said Dr. Jeffrey Mendel, who presented a lecture on evaluation of workstation needs just prior to the face-off.

"If five or six doctors use a workstation once or twice a week, you need a very intuitive type of software. Some workstations offer more ease of use," said Mendel, chair of radiology at Caritas St. Elizabeth's Medical Center in Boston.

If only one or two radiologists use the workstation and do so regularly, however, ease of use becomes less important than power and functionality.

When choosing a workstation, it's important to evaluate strengths and weaknesses of each workstation, determine which two or three functions users will be performing most often, and pick a product that allows greatest ease of use for these particular functions (see accompanying article).

"If you have a practice that does a lot of CT angiography of lower extremities, you want processing software that gives you the best and easiest solution for visualization of vessels," Mendel said.

Prospective buyers are also advised to evaluate long-term support and upgrade costs. Some vendors are very generous in upgrading software, while others will charge more for new developments, he said.

Integration of CT software with PACS will also become increasingly important. Basic volume rendering is available in an integrated package, but most advanced processing for functions such as stenosis quantification and virtual colonography is currently being done on separate, stand-alone workstations.

"Most radiologists who are purchasing now are using a combination of stand-alone and integrated packages. It is very likely, over the next year or two, that vendors will continue to migrate their most powerful processing to their integrated products, but they are not there yet," Mendel said.

Ms. Hayes is feature editor of Diagnostic Imaging.

---

Tips for assessing advanced workstations

When in the market for a new product, ask a few basic questions

Catalogue resources

What software is already available on your modalities, PACS, and stand-alone workstations?

Is it possible to upgrade legacy workstations?

Assess needs

Who needs to use specific software packages?

Do you need Web access?

Will your technologists perform the processing?

Choose the right vendor

Which software runs in a PACS environment?

Is the software user-friendly in your PACS environment?

How usable is the 3D Web environment?

Look to the future

What are the future integration plans of your PACS vendor?

Source: J. Mendel, presentation at 8th Annual International Symposium on Multidetector-Row CT in San Francisco, June 2006