- try to put tA in a chroot jail
If you have a more or less recent Linux kernel, there is a much better option: LXC (Linux Containers). In fact, it takes just one system call to start a new process in an isolated area (and there are utilities that do this in a convenient way). You can have a separate filesystem namespace, a separate process list, etc. In very recent kernels, containers can even use a "fake root" account (i.e. it appears as if you were root, but this "root" can only manage things inside its own container). So you can manage it as if it were a separate virtual machine, if you want to, or you can just start an isolated process. Parts of this technology have existed for ages in the so-called OpenVZ patches (used by dozens of VDS hosters). These days the most interesting parts are in the mainline kernel, so you don't need to bother with custom patched kernels: if you have a recent kernel, it's already there. Btw, Google uses this technique to sandbox Chromium under Linux. You can look at their wrapper source if you're interested: it's quite small and clean and definitely adds to their defence. And it's much more secure than chroot.
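To show what I mean by "one system call", here is a minimal C sketch of the underlying mechanism. It is not how the LXC tools are actually implemented, just the raw kernel interface; it assumes root privileges and a kernel recent enough to have PID and network namespaces:

/* nsdemo.c -- sketch of the "one system call" idea.
 * clone() with namespace flags gives the child its own PID, mount,
 * hostname (UTS) and network namespaces. Needs root and a reasonably
 * recent kernel. Build: gcc -o nsdemo nsdemo.c
 */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];          /* stack for the cloned child */

static int child_main(void *arg)
{
    (void)arg;
    /* Inside its own PID namespace this process sees itself as PID 1. */
    printf("child: my pid in here is %ld\n", (long)getpid());
    /* A real container would now mount its own rootfs, drop privileges
     * and exec the service; for the demo we just start a shell. */
    execlp("/bin/sh", "sh", (char *)NULL);
    perror("execlp");
    return 1;
}

int main(void)
{
    int flags = CLONE_NEWPID | CLONE_NEWNS | CLONE_NEWUTS | CLONE_NEWNET;
    pid_t pid = clone(child_main, child_stack + sizeof(child_stack),
                      flags | SIGCHLD, NULL);
    if (pid == -1) {
        perror("clone");
        exit(EXIT_FAILURE);
    }
    printf("parent: started isolated child, pid %ld on the host\n", (long)pid);
    waitpid(pid, NULL, 0);
    return 0;
}

The LXC utilities wrap exactly this kind of call and add the filesystem setup, networking and resource limits for you.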
Pros of LXCs:
+ Much better isolation than a dumb chroot. Probably even better than SELinux (which gets disabled by each and every public kernel exploit anyway). Chroot was never designed as a security feature, so it may allow certain ways to break out of the jail, especially if the attacker has managed to elevate himself to root.
+ No speed overhead. Unlike a fully blown hypervisor doing full virtualisation of the hardware (vmware, xen, kvm, ...), there is very little overhead, and the speed you get is virtually the same as if you were running directly on the "host". So you get bare-metal speed while still being able to use some virtualisation-like features and enjoy decent process separation (you can even limit resources separately). The worst-case speed penalty is just 2-5% or so, so you would not notice it at all.
+ Not so hard to set up. Modern kernels already come with it, so it's a matter of installing a few tools (usually from the default OS repos). IMO it's way more fun than dealing with crappy SELinux policies etc. and leads to an even better result: we instruct the kernel to keep certain processes isolated from the rest of the OS in a quite efficient and predictable way, which is IMO easier to supervise than numerous ACLs and policies (which are hard to keep in your head, so there is a risk of mistakes).
+ You can manage it either as a fully blown OS installation (up to having its own IP for that "guest OS", its own root account and a custom set of packages) or just use it as an advanced chroot-like approach, managing things from the host.
+ You can control how many resources all of this is allowed to consume (see the sketch after this list).
+ In fact, there is only one copy of the OS kernel booted for everything, so the memory overhead is not as high as with full hypervisors running truly separate copies of OSes and their kernels.
+ A minimalistic setup which only runs one or a few services is easy to monitor for changes. If you're paranoid, you can try to put the web server and eA into different containers, letting them communicate via network only and not giving them any access to the host IP at all. Since the overhead is quite low, it's affordable. It would not work out of the box, but it shouldn't be rocket science either.
+ It's easy to place some traps in containers with no chance of suffering yourself, since you can cheat: the host can access all resources and you can do your jobs from the host side ("supergod mode").
+ You may use checkpoints for fun and profit.
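To illustrate the resource-control point above, here is a minimal C sketch. It assumes a cgroup v1 memory controller mounted at the usual /sys/fs/cgroup/memory path; the group name "demo_container" is made up, and on cgroup v2 systems (or when using the LXC tools' own config) the file names and paths differ:

/* cglimit.c -- cap a process at 256 MB of memory via cgroups.
 * Assumes a cgroup v1 memory controller mounted at /sys/fs/cgroup/memory;
 * "demo_container" is a made-up group name.
 * Build: gcc -o cglimit cglimit.c ; run as root.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>

#define CG "/sys/fs/cgroup/memory/demo_container"

static void write_file(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); exit(EXIT_FAILURE); }
    fputs(value, f);
    fclose(f);
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid-to-confine>\n", argv[0]);
        return 1;
    }
    mkdir(CG, 0755);                                       /* create the group */
    write_file(CG "/memory.limit_in_bytes", "268435456");  /* 256 MB cap */
    write_file(CG "/tasks", argv[1]);                      /* move the PID in */
    printf("pid %s is now limited to 256 MB\n", argv[1]);
    return 0;
}

In practice you would let the LXC tooling set these limits for a whole container rather than poking single PIDs, but the mechanism underneath is the same.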
Cons of LXCs:
- Still, you have to spend some time and effort to configure it and get familiar with this technology.
- Some things will not run out of the box, but as long as you're familiar with chroots and overall system management, that shouldn't be a serious problem.
- You can't use the "level 5 system magic" known as "live migration" yet. Doing so would require either a fully blown hypervisor or at least messing with OpenVZ kernels.
- Still, a full hypervisor which virtualises the hardware would offer somewhat better protection (at the cost of speed). Say, in Xen the most critical part, the hypervisor which manages resource access and has total access to everything, is quite small in terms of code size. This means it's very likely there are fewer bugs in Xen's hypervisor than in something larger like the Linux kernel. Sure, there is still a chance of bugs (and they have been found in the past), but the smaller the code, the fewer bugs you can expect. The disadvantage is obvious: full hypervisors imply a serious speed penalty. You can get 90% of the CPU's computational speed quite easily, but when it comes to massive I/O, full hypervisors may give you a headache. Disk I/O or networking can put a lot of load on the host's CPU and can be much slower than "bare metal" I/O. And servers tend to do lots of I/O, unfortunately.
Some hints if you're going to try virtualization or containers:
1) Never allow administrative network access to the "host" itself (the physical machine running the containers/VMs) to anyone but yourself. Assign separate IPs to the VMs/containers running the guests and make only them reachable from the public internet, not the host itself. This may require some port forwarding, NATing and firewalling, depending on how many routable IPs you can afford and what the setup is. Once the host is hacked, every guest on it can be hacked and you have a problem. On the other hand, if the host is not accessible via the network, there is almost nothing to hack, and a hacked container/VM isn't a really huge issue. At least it's much easier and faster to re-create a container/VM or roll back to a known-good snapshot than to perform a full OS reinstall on real hardware. And it's easier to detect.
2) I believe the best defence is a lightning bolt popping out of nowhere at the enemy's head. Have you heard about rootkits? With virtualisation techniques you can make a similar approach work on your side, striking hackers from nowhere when they do not expect it. Look: the hacker can't see the host's processes, nor the full filesystem. But you can monitor all container activity in an invisible way from the host. You can check, say, md5 sums of all the container's files in the background, or catch all FS accesses in your monitor, keeping checksums, logs and monitors outside of the container's view. The hacker would neither see the monitor(s) in the process list nor be able to kill them or erase the logs. As for me, it's funny enough.
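If you want to play with the host-side monitor idea from point 2, here is a minimal C sketch. The guest rootfs path is an assumption for a typical LXC layout, and for simplicity it only compares modification times; a real monitor would keep proper md5/sha256 checksums or use inotify:

/* guestmon.c -- host-side change monitor for a container's filesystem.
 * The rootfs path below is an assumption; adjust for your setup.
 * Build: gcc -o guestmon guestmon.c
 * Run it on the host -- the guest can neither see nor kill it.
 */
#define _GNU_SOURCE
#include <ftw.h>
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

#define GUEST_ROOT "/var/lib/lxc/guest/rootfs"   /* hypothetical path */

static time_t last_scan;   /* files newer than this get reported */

static int check_entry(const char *path, const struct stat *sb,
                       int typeflag, struct FTW *ftwbuf)
{
    (void)ftwbuf;
    if (typeflag == FTW_F && sb->st_mtime > last_scan)
        printf("CHANGED: %s\n", path);    /* candidate for an alert */
    return 0;                             /* keep walking */
}

int main(void)
{
    last_scan = time(NULL);               /* baseline: only report future changes */
    for (;;) {
        time_t scan_started = time(NULL);
        if (nftw(GUEST_ROOT, check_entry, 20, FTW_PHYS) == -1)
            perror("nftw");
        last_scan = scan_started;
        sleep(60);                        /* rescan once a minute */
    }
    return 0;
}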
3) Hey, have I mentioned that the host can conduct administrative actions on guests from its side? It can access all their files, inject new processes into guests, etc. This allows you to turn some key system utilities in the guest containers into useless stubs which rather serve as traps. You want ls? uname? Something else? Wow, cool. But under a normal scenario the web server's process doesn't need them, does it? And a legitimate admin can do ls on the full FS view from the host side. So if we replace ls and uname with a trap which signals to the host that the container is compromised (then the host may shut down the container and send an alarm), we can have some fun watching hackers report themselves and start the shutdown sequence (maybe with checkpointing, to study the system state and the hacker's actions later). And best of all, hackers can't easily interfere with any of this, since host activity is invisible to them. They would only notice that something went wrong when the container is shut down, the admin alerted and it's too late. I think it's funny enough to use their most advanced "rootkit"-like techniques against 'em.
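And a minimal C sketch of the trap binary from point 3. The host-side alert IP and port are assumptions (the idea is that something on the host listens there, freezes the container and wakes up the admin); the stub reports the intruder and then plays dead:

/* trap.c -- stub that replaces ls/uname inside the guest.
 * Build: gcc -o trap trap.c , then install it over the binaries your
 * service never needs.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define HOST_ALERT_IP   "10.0.3.1"   /* hypothetical host-only address */
#define HOST_ALERT_PORT 9999         /* hypothetical alert listener port */

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd >= 0) {
        struct sockaddr_in host;
        memset(&host, 0, sizeof(host));
        host.sin_family = AF_INET;
        host.sin_port   = htons(HOST_ALERT_PORT);
        inet_pton(AF_INET, HOST_ALERT_IP, &host.sin_addr);
        if (connect(fd, (struct sockaddr *)&host, sizeof(host)) == 0) {
            const char *msg = "ALERT: trap binary executed in container\n";
            if (write(fd, msg, strlen(msg)) < 0)
                ;                        /* nothing useful to do on failure */
        }
        close(fd);
    }
    /* Pretend to be a broken tool so the intruder learns nothing useful. */
    fprintf(stderr, "Segmentation fault\n");
    return 139;
}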
Note: that's only my own view, and since I deal with virtual environments I can imagine some interesting ways to use them. This does not provide perfect security, but it seriously raises the bar.
The overall idea is: imagine that you're the hacker. Imagine that you want to hack this host. Then imagine your worst fears and the worst surprises you could run into on the way. Then implement them, and there is a good chance the hackers will be quite unhappy to swallow them as well.