Solaris got bad press in the early days, when "Slowaris" was a common nickname.
In very early releases this was well deserved, but most of the major performance problems have since been fixed, and Solaris is genuinely fast as a server - fast filesystems, I/O, and networking.
Where Solaris isn't so fast is as a desktop, development, and scripting platform. However, a lot of this isn't intrinsic to Solaris, but due to poor design or implementation of applications. For an example, see making man faster.
Also remember that when it comes down to a choice between fast and safe, Solaris will always play it safe.
That isn't to say that you can't make Solaris, and applications running on it, feel a lot faster.
Process creation is not one of the things Solaris does quickly, so shell scripts that fork lots of processes pay a real penalty. There are several things you can do to cut that cost.
The first, and most obvious, is to simply reduce the number of processes a shell script launches. As an example, if you take the standard mozilla startup script shipped with Solaris 10 and use it to start up mozilla, it invokes about 60 processes along the way. Most of these exist to work out what platform it's on, where mozilla is installed, and the like. Going through the script (actually, there are three) and simply hardcoding the values (after all, you know where it's installed and what platform it's running on) makes a noticeable difference to startup performance.
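The idea looks like this in practice. The fragment below is a hypothetical sketch, not the actual Solaris 10 mozilla script; the path and architecture values are illustrative, so substitute whatever is true for your machine:

```shell
#!/bin/sh
# Hypothetical wrapper fragment. Instead of probing on every startup,
# each backquote costing a fork+exec:
#   MOZ_ARCH=`uname -p`
#   MOZILLA_FIVE_HOME=`dirname \`which mozilla\``
# hardcode what you already know about this machine:
MOZ_ARCH=sparc
MOZILLA_FIVE_HOME=/usr/sfw/lib/mozilla
export MOZILLA_FIVE_HOME
echo "arch=$MOZ_ARCH home=$MOZILLA_FIVE_HOME"
```

Two lines of assignment replace several subprocesses, and the saving is multiplied across every probe the script used to make.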
Another possibility is to use a more advanced shell, such as ksh instead of sh. With ksh you get built-in arithmetic, which can make code both cleaner and faster - enough to outweigh the slightly higher cost of launching ksh itself.
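The difference matters because the classic Solaris /bin/sh has no arithmetic of its own: every increment means forking expr. A shell with built-in arithmetic (ksh, or any POSIX shell) does the same work in-process. A minimal sketch:

```shell
#!/bin/sh
# In classic Bourne sh, each step of a counting loop is a new process:
#   i=`expr $i + 1`
# so counting to 1000 costs 1000 fork+execs of expr.
# ksh and POSIX shells evaluate the arithmetic in-process instead:
i=0
while [ $i -lt 1000 ]; do
    i=$((i + 1))
done
echo "count=$i"
```

On a system where process creation is expensive, the in-process loop runs orders of magnitude faster than the expr version.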
Another possibility is to use a program instead of a shell script. For example, if you use ./configure, make sure you have ginstall in your path, as configure will pick that up rather than falling back on the install.sh shell script, and the difference in installation speed is considerable.
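One way to arrange this is to put the GNU utilities directory ahead of everything else before running configure. The directory below is where Solaris 10 ships its GNU tools, but treat it as an assumption and adjust for your install:

```shell
#!/bin/sh
# Put the GNU tools directory (ginstall lives here on Solaris 10,
# assumed path) at the front of PATH before running ./configure:
PATH=/usr/sfw/bin:$PATH
export PATH
# confirm the GNU directory really is searched first
first=${PATH%%:*}
echo "first PATH entry: $first"
```

With that in place, configure's install check finds a real binary and the generated Makefiles never touch the shell-script fallback.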
Also, use exec if you can, rather than launching the child process and keeping the parent waiting. This avoids having to create a new process, doesn't waste the resources used by the old process, and doesn't have bad interactions with the scheduler that can really hurt performance.
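As a sketch of the difference: a wrapper script that ends by running its real program normally forks a child and then sits idle waiting for it, while exec replaces the shell with that program, so there is one process instead of two. The demonstration below runs the exec inside a subshell purely so the script itself can carry on afterwards:

```shell
#!/bin/sh
# Without exec, the last line of a wrapper forks a child and the
# shell lingers, waiting:
#   realprog "$@"
# With exec, the shell process *becomes* the child:
#   exec realprog "$@"
# Demonstrated here in a subshell (the echo replaces the subshell,
# not this script):
out=`(exec echo "replaced the subshell")`
echo "$out"
```

In a real wrapper, exec should be the final line, since nothing after it will ever run.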
If you run gnome, you need lots of memory. The same goes for StarOffice, java, and mozilla. We're looking at a gigabyte minimum for desktop machines.
It seems odd to me that a simple terminal emulator can be so big and suck up so many resources, but they seem to be getting fatter all the time. Do you really need all the extra bells and whistles of gnome-terminal? What's wrong with xterm anyway?
Actually, there are massive performance problems in gnome-terminal. This is most obvious when you have something - like ls for example - that prints out line by line (things like vmstat are also very bad). It's horrifically slow and thrashes the system. This has been fixed in later builds of S10.