System Requirements
Supported Systems
- Windows
- Mac
- Linux
- GPGPU
Windows Supported Operating Systems
- Windows 11
We support the English version of the OS as well as internationalized versions in Spanish, French, German, and Japanese.
Workflows that rely on Desmond are not supported on Windows or Mac; they can only be run on Linux. This includes Molecular Dynamics, IFD-MD, FEP+, WaterMap, and a number of Materials Science workflows.
GPU machine learning applications such as Active Learning Glide and DeepAutoQSAR on GPU can only be run on Linux.
Timeline
We aim to provide support for new operating system versions 3 months after their public release.
Support cannot be provided once an OS platform version has reached "end of life" (EOL). Check with your platform provider for EOL information.
Upcoming Changes
As of the 2025-4 release, Schrödinger software uses Job Server by default for submitting and managing computational jobs. This new infrastructure replaces the legacy Job Control system and provides a more robust, secure, and scalable solution for modern computing environments. Customers who currently use Job Control will need to deploy and configure Job Server.
To view a list of recent infrastructure changes that may require changes from your IT team, click here.
Mac Supported Operating Systems
- macOS Tahoe (26)
- macOS Sequoia (15)
- macOS Sonoma (14)
Workflows that rely on Desmond are not supported on Windows or Mac; they can only be run on Linux. This includes Molecular Dynamics, IFD-MD, FEP+, WaterMap, and a number of Materials Science workflows.
GPU machine learning applications such as Active Learning Glide and DeepAutoQSAR on GPU can only be run on Linux.
Timeline
We aim to provide support for new operating system versions 3 months after their public release.
Support cannot be provided once an OS platform version has reached "end of life" (EOL). Check with your platform provider for EOL information.
Upcoming Changes
As of the 2025-4 release, Schrödinger software uses Job Server by default for submitting and managing computational jobs. This new infrastructure replaces the legacy Job Control system and provides a more robust, secure, and scalable solution for modern computing environments. Customers who currently use Job Control will need to deploy and configure Job Server.
To view a list of recent infrastructure changes that may require changes from your IT team, click here.
Linux Supported Operating Systems
- Red Hat Enterprise Linux (RHEL) 8.10, 9.4, 9.6, 10.0
  Please make sure the listed packages are installed:
  sudo yum/dnf install <lib>
- Rocky Linux 8.10, 9.4, 9.6, 10.0
  Please make sure the listed packages are installed:
  sudo yum/dnf install <lib>
- Ubuntu 22.04 LTS and 24.04 LTS
  Please make sure the listed packages are installed:
  sudo apt-get install <lib>
All supported distributions provide glibc 2.28 or greater.
If using NFS, file locking must be enabled.
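To confirm the glibc version on a given machine, either of the following standard commands reports it:
getconf GNU_LIBC_VERSION
ldd --version | head -n 1
On a supported distribution the reported version should be 2.28 or greater.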
Timeline
We aim to provide support for new operating system versions 3 months after their public release.
Support cannot be provided once an OS platform version has reached "end of life" (EOL). Check with your platform provider for EOL information.
Upcoming Changes
As of the 2025-4 release, Schrödinger software uses Job Server by default for submitting and managing computational jobs. This new infrastructure replaces the legacy Job Control system and provides a more robust, secure, and scalable solution for modern computing environments. Customers who currently use Job Control will need to deploy and configure Job Server.
To view a list of recent infrastructure changes that may require changes from your IT team, click here.
GPGPU
We support the following NVIDIA solutions:
| Architecture | Server / HPC | Workstation |
| --- | --- | --- |
| Pascal | Tesla P40, Tesla P100 | Quadro P5000 |
| Volta | Tesla V100 | |
| Turing | Tesla T4 | Quadro RTX 5000 |
| Ampere | A100 | RTX A4000, RTX A5000 |
| Ada Lovelace | L4 | RTX 4000 SFF Ada, RTX 2000 Ada |
| Hopper | H100 | |
| Blackwell | B200 (SXM) | RTX PRO 6000 Blackwell Workstation |
Unless otherwise specified, we support and test only the PCIe variants of the cards listed above.
To check the compute capability of NVIDIA cards, see NVIDIA CUDA GPU Compute Capability.
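On hosts that already have a recent NVIDIA driver installed, the compute capability can also be queried locally; note that the compute_cap query field is only available in newer driver branches:
nvidia-smi --query-gpu=name,compute_cap --format=csv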
Supported Linux drivers
- We support only the NVIDIA recommended / certified / "production branch" Linux drivers for these cards, with a minimum CUDA version of 12.0. Download them from NVIDIA's Drivers webpage.
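After installing or updating a driver, a quick way to confirm the installed driver version is:
nvidia-smi --query-gpu=driver_version --format=csv,noheader
The banner of plain nvidia-smi output also shows both the "Driver Version" and the "CUDA Version"; the latter should be 12.0 or greater.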
Supported Multi-Instance GPU (MIG)
- We support NVIDIA’s Multi-Instance GPU (MIG) feature. Please make sure that MIG-enabled GPUs are used in conjunction with a queueing system that supports this feature; see How do I make use of Multi-instance GPUs (MIG).
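When MIG is enabled, each GPU instance is exposed as a separate device; listing the devices is a quick way to confirm what the queueing system will see (the instances and profiles shown depend on how the GPU was partitioned):
nvidia-smi -L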
Pre-configured Schrödinger compatible GPU boxes
- For information on pre-configured Schrödinger compatible GPU boxes see this article.
Notes
- Standard support does not cover consumer-level GPU cards such as GeForce GTX cards. Learn more about our rigorous validation process and why we exclusively support professional-grade NVIDIA hardware in this article.
- If you already have another NVIDIA GPGPU and would like to know if we have experience with it, please contact our support at help@schrodinger.com.
Hardware Requirements
| Component | Required |
| --- | --- |
| Processor (CPU) | x86_64 compatible processor (Apple silicon M-series processors are supported). For large jobs, computing on a cluster with a queueing system is recommended, with the hardware components described below. |
| System memory (RAM) | RAM requirements are not related to the input file size; only the disk space requirements are. |
| Disk space | 60 GB minimum scratch disk space for running jobs. Faster local disk access is important for jobs that read a lot of data; for example, an SSD, a disk with a higher rotational speed (e.g. 10,000 rpm), or a disk array that uses multiple controllers and striping can be beneficial. Local disks are preferred over networked disks for temporary storage (or for data that is used often) because networked disks are affected by network access, bandwidth, and traffic. |
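As a quick sanity check before launching large jobs, confirm that the scratch directory on each compute host has at least 60 GB free; the path below is only an example and should be replaced with the scratch directory (tmpdir) configured for your installation:
df -h /scratch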
Supported Queuing Systems
To run jobs on a remote host, a queuing system is required.
Supported Features
| Queuing system | License Checking | Native GPU support version |
| --- | --- | --- |
| PBS Pro | Yes | 11.0 |
| LSF | Yes | 9.1 |
| Univa Grid Engine | Yes | 8.1 |
| Sun Grid Engine, Open Grid Scheduler | Yes | None |
| Torque | No | 2.5.4 |
| Slurm | Yes (Slurm version >= 18.08.2) | 2.2 |
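As an illustration only, a minimal schrodinger.hosts entry for submitting to a Slurm cluster might look like the sketch below. The entry name, host name, installation path, partition, and qargs are placeholders, and the correct queue and qargs values depend on your cluster and installation; consult the installation documentation for your site before copying any of these settings.
# Hypothetical host entry for a Slurm cluster; all values are placeholders.
name: hpc-slurm-gpu
host: headnode.example.com
schrodinger: /opt/schrodinger/suite
queue: SLURM2.1
qargs: -p gpu --gres=gpu:1
processors: 256
tmpdir: /scratch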
Extra considerations for visualization and interacting with Maestro
GPU
We recommend using a graphics card that supports hardware-accelerated OpenGL with at least 1 GB of onboard memory and an up-to-date vendor-supplied graphics driver.
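If you are unsure whether hardware-accelerated OpenGL is in use on a Linux workstation, one quick check is to query the active renderer (glxinfo is provided by the mesa-utils or glx-utils package, depending on the distribution):
glxinfo | grep -i "opengl renderer"
If the reported renderer is llvmpipe or another software rasterizer, rendering is not hardware accelerated.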
Network file share
Using a networked file share mounted via CIFS (Samba) is not recommended, as Maestro projects use SQLite databases that have locking requirements not typically available on such shares.
Mouse
We recommend using a 3-button mouse with a scroll wheel. The actions performed by the mouse buttons and wheel can be customized; see the Maestro documentation for details.
Requirements for working from home
Find best practices on how to work from home with the Schrödinger Suite.
3D stereo display
For a 3D stereo display, we recommend Looking Glass Monitors.
Remote display
A local installation of Maestro on your laptop or workstation will always run better than running it from a remote compute resource. If you need to run Maestro remotely, various protocols exist for virtual desktops with varying degrees of compatibility with OpenGL, Qt, and other graphics dependencies that Maestro has. Support for such protocols is outside of our control.
We strongly recommend against running Maestro via basic X11 forwarding, e.g., via “ssh -X workstation”, as the performance will be poor.
Coupling an up-to-date version of VirtualGL (www.virtualgl.org) with a remote desktop protocol can make a remote session almost as responsive as a local environment. Once installed, launch Maestro as:
vglrun $SCHRODINGER/maestro
Typical “compute nodes” in HPC environments often do not have high-quality graphics cards and are not optimal for running Maestro; their graphics hardware is often dedicated to GPU computing.