All the resources of the Computing Centre are available to its users through virtual machines accessible over terminal or graphical remote desktop connections (Remote Desktop Connection Manager (RDCMan) or RDP, PuTTY, VNC, or FreeNX). The virtual machine provided to the user is a full-fledged researcher's workplace based on the selected operating system (Windows or Linux); it allows the user to develop and debug their own programs, perform computations, and access a wide range of scientific and development software. Files located in the user's home directory on a virtual machine can be processed on any physical resource of the Centre. The system also makes it possible to create groups of virtual machines sharing common disc space for use by research groups or University subdivisions.
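For instance, a Linux virtual machine can also be reached from a script rather than interactively. The sketch below uses the Python paramiko library to open an SSH session and run a single command; the host name, user name and key path are illustrative placeholders, not actual Computing Centre addresses.

    import paramiko  # third-party SSH client library

    # Hypothetical connection details; replace with the address and
    # credentials issued by the Computing Centre.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("vm.example.edu", username="researcher",
                   key_filename="/home/researcher/.ssh/id_rsa")

    # Run a command on the virtual machine and print its output.
    stdin, stdout, stderr = client.exec_command("uname -a")
    print(stdout.read().decode())
    client.close()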

A user applying to the Computing Centre can either request one of the standard researcher's workplace configurations that best fits their scientific needs or order a new configuration developed by the CC personnel for their specific research.
All the hardware hosted within the Computing Centre, including servers, clusters and storage systems, is consolidated into a single heterogeneous complex to achieve higher reliability, accessibility and flexibility of use while lowering maintenance and support costs.
The complex is covered by a unified management system that controls:
• authentication and authorization
• data storage and protection
• administration, monitoring and management
• registration and access
• virtualization
• software licenses use

At the core of the complex lies a virtualization system built on VMware. The virtualization system of the Computing Centre can support up to 1500 basic virtual machines, each with 2-6 cores and 4-24 GB of RAM.
The data storage and protection system mounts on these basic machines the virtual discs that house the users' home directories and the shared folders containing all the software products available to CC users.
Data from user directories can be transferred for processing to any physical device belonging to the complex, provided the user's access rights allow the use of that device.
The operation of the virtualization system is backed by 60 twelve-core Hewlett-Packard servers with 96 GB of RAM per server.
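Because the home directories reside on the shared storage, a file written on the virtual machine is visible from any other resource the user is allowed to use. A minimal illustration (assuming the shared home directory described above):

    import getpass
    import os
    import socket

    # The user's home directory is served from the Centre's shared storage,
    # so a record written here on the virtual machine is also visible on
    # any compute node the user is permitted to access.
    marker = os.path.join(os.path.expanduser("~"), "where_am_i.txt")
    with open(marker, "a") as f:
        f.write("seen on host %s by %s\n" % (socket.gethostname(),
                                             getpass.getuser()))

    print(open(marker).read())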
If the user's research requires more computing power than a virtual machine can provide, they can extend it by using additional computing platforms.


HPC platform:
• 48 nodes: 2x Intel Xeon E5335, 2.0 GHz, 8 cores, RAM 16 GB, IB (InfiniBand) 20 Gbps, Eth 1 Gbps, 3.07 TFlops;
• 20 nodes: 2x Intel Xeon E5-2680 v3, 2.5 GHz, max 3.3 GHz, 24 cores, RAM 128 GB, IB 56 Gbps, Eth 10 Gbps, 7 TFlops.
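Applications intended for the HPC platform are normally distributed across its nodes with MPI. The following minimal Python sketch uses the mpi4py package; the availability of mpi4py and the mpirun launcher on the Centre's nodes is an assumption, not a statement of the installed software.

    from mpi4py import MPI  # assumes an MPI stack and mpi4py are installed

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # index of this process
    size = comm.Get_size()   # total number of processes across the nodes

    # Each process would normally work on its own slice of the problem;
    # here it simply reports where it runs, e.g. when launched as:
    #     mpirun -n 48 python hello_mpi.py
    print("process %d of %d on %s" % (rank, size, MPI.Get_processor_name()))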

GPGPU platform:
• 16 nodes: 2x Intel Xeon X5650, 2.66 GHz, max 3.06 GHz, 12 cores, 3x NVIDIA Tesla M2050, RAM 96 GB, IB 40 Gbps, Eth 10 Gbps, 11.1 TFlops;
• 8 nodes: 2x Intel Xeon X5650, 2.66 GHz, max 3.06 GHz, 12 cores, 8x NVIDIA Tesla M2050, RAM 96 GB, IB 40 Gbps, Eth 10 Gbps, 16.5 TFlops;
• 10 nodes: 2x Intel Xeon E5-2680 v3, 2.5 GHz, max 3.3 GHz, 24 cores, 2x NVIDIA Tesla K40, RAM 128 GB, IB 56 Gbps, Eth 10 Gbps, 21.6 TFlops.

SMP platform:
• 2 DL980 servers (SMP1, SMP3): 8x Intel Xeon X7560, 2.266 GHz, max 2.67 GHz, 64 cores, RAM 0.5 TB, 1.02 TFlops;
• 1 DL980 server (SMP2): 8x Intel Xeon X7560, 2.266 GHz, max 2.67 GHz, 64 cores, RAM 2.0 TB, 0.52 TFlops;
• 1 Tecal RH5885 V3 server (NODE 32): 4x Intel Xeon E7-4880 v2, 2.5 GHz, max 3.1 GHz, 60 cores, RAM 1.0 TB.

Access to all the additional platforms is granted through a system of queues, with limits set on the time of continuous use.
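For illustration, assuming a SLURM-like scheduler manages these queues (an assumption; the Centre's actual scheduler, queue names and limits may differ), a job could be submitted from Python as follows. The partition name, time limit and script name are hypothetical.

    import subprocess

    # Hypothetical submission of a batch script to a SLURM-like queue.
    # The partition and time limit must respect the continuous-use limits
    # set by the Centre for the chosen platform.
    result = subprocess.run(
        ["sbatch", "--partition=hpc", "--time=12:00:00", "job.sh"],
        capture_output=True, text=True, check=True)
    print(result.stdout)  # e.g. "Submitted batch job 12345"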