
March 21, 2010



From a scalability point of view it seems that letting the graphics processing happen on the client makes more sense. Other than lack of processing power on the client, what is a benefit of doing 2D/3D processing on a server?

However, if server vendors wanted to have the graphics on the server, it seems that Intel/AMD are best positioned to do so by giving up some CPU cores for graphics cores (like Intel is already doing with Atom/Pineview).

Nik Simpson

I suspect it's probably about two things:

Network bandwidth: If doing this on the server side means that you don't need as much bandwidth to the client, then it may open up additional client-side scenarios.

Low-end clients: Doing everything on the client side is fine only if the client has the processing capability to support it.
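The bandwidth point is easy to put numbers on. A rough sketch (the screen size, frame rate, and compression ratio here are illustrative assumptions, not RemoteFX spec figures):

```python
# Back-of-envelope: bandwidth needed to remote a rendered desktop.
# Assumed: 1920x1080 screen, 24 bits per pixel, 30 frames per second.
width, height = 1920, 1080
bits_per_pixel = 24
fps = 30

raw_bits_per_sec = width * height * bits_per_pixel * fps
print(f"Uncompressed: {raw_bits_per_sec / 1e9:.2f} Gbit/s")

# With an illustrative 100:1 codec ratio (assumed, not a measured figure):
compressed_bits_per_sec = raw_bits_per_sec / 100
print(f"Compressed:   {compressed_bits_per_sec / 1e6:.1f} Mbit/s")
```

Uncompressed that's roughly 1.5 Gbit/s, which is why server-side rendering only makes sense when paired with an efficient codec on the wire.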

As to the use of graphics cores on the CPU, I suspect the problem is that you need a lot more "grunt" from the graphics engine to make it worthwhile. The graphics core on current CPUs is pretty low-end in terms of performance, and also sucks up system memory which would otherwise be available for supporting more remote desktops.
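The memory cost of shared-memory integrated graphics can also be sketched quickly. All figures below (host RAM, carve-out size, per-session budget) are assumptions for illustration:

```python
# Illustrative only: how an integrated GPU's carve-out from system RAM
# reduces the number of remote desktop sessions a host can support.
total_ram_mb = 32 * 1024   # assumed 32 GB host
igpu_reserved_mb = 512     # assumed memory carved out for integrated graphics
per_desktop_mb = 1024      # assumed RAM budget per remote desktop session

sessions_without_igpu = total_ram_mb // per_desktop_mb
sessions_with_igpu = (total_ram_mb - igpu_reserved_mb) // per_desktop_mb
print(sessions_without_igpu, sessions_with_igpu)  # 32 vs 31 sessions
```

A discrete card with its own VRAM avoids that carve-out entirely, which is part of why the integrated cores look less attractive for this workload.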


Graphics cards are also good at general computation, and as far as I remember they're considerably faster than the processor at parallel workloads.

Nik Simpson

Some useful stuff here


Seems that hardware acceleration at the host, in the form of graphics cards or purpose-built RemoteFX accelerators, is a requirement.


Most servers really don't have fancy video cards, and I guess they're not needed. :)


I'm coming at this from the user perspective and thought I'd throw in my 2 cents. I work with fluid modeling software (ANSYS CFX) and I share an HPC with others in my group. We use every bit of 8 processors plus gigs of memory to simulate fluid flow. The files produced end up on the order of 2-3 gigs. As is the nature of simulation, the design process is iterative: we model what seems like it will work, we see weak points, we improve the design, we resimulate, and so on. The rub is, we'd rather not transfer 2-plus gigs back and forth with each iteration. Instead, we'd like to Remote Desktop to the server, make changes, and rerun. 3D models and the simulation files can be very graphics intensive. This means the server needs graphics card performance and the RD client needs smooth video streaming. So I say woohoo! to those furthering these technologies.
I'm a small niche in the server market perhaps, but as fluid and solid modeling becomes more mainstream I'm sure these needs will proliferate.
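The copy-versus-remote tradeoff in that workflow is easy to quantify. A sketch, assuming a 2.5 GB result file (the size range from the comment) and an assumed 100 Mbit/s link:

```python
# Rough numbers behind "we'd rather not transfer 2+ gigs each iteration":
# time to copy one result file across an assumed 100 Mbit/s link.
file_gb = 2.5      # typical CFX result file size, per the comment
link_mbit = 100    # assumed network throughput

transfer_sec = file_gb * 8 * 1000 / link_mbit
print(f"One-way copy: {transfer_sec:.0f} s")
```

That's over three minutes each way, per design iteration, before any work starts; remoting into the server avoids the copy entirely at the cost of needing a responsive remote graphics pipeline.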


Physical x16 slots should be enough. In fact, running physical x16 video cards in x8 mode is already common in some multiple-video-card configurations on desktops.
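The x8-mode point holds up on paper. PCIe 2.0 delivers roughly 500 MB/s per lane per direction, so halving the electrical lanes still leaves a healthy budget:

```python
# PCIe 2.0 per-direction bandwidth per lane is ~500 MB/s.
per_lane_mbps = 500

x16_mbps = 16 * per_lane_mbps  # full x16 slot
x8_mbps = 8 * per_lane_mbps    # same card running electrically at x8
print(x16_mbps, x8_mbps)       # 8000 vs 4000 MB/s
```

For remote desktop workloads, where the rendered output leaves the box over the network rather than over the PCIe bus, 4 GB/s per card is rarely the bottleneck.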

