KNIME on the FASRC clusters

Description

KNIME is an open-source data analytics, reporting, and integration platform designed to support machine learning and data mining through its modular data-pipelining concept. The platform integrates tasks ranging from developing analytic models to deploying them and sharing insights with your team. The KNIME Analytics Platform offers users 300+ connectors to multiple data sources and integrations with all popular machine learning libraries.

The software’s key capabilities include Data Access & Transformation, Data Analytics, Visualization & Reporting, Statistics & Machine Learning, Generative AI, Collaboration, Governance, Data Apps, Automation, and AI Agents.

Given KNIME’s widespread use and applicability, we have made it available as a system-wide module that can be loaded from anywhere on either of the FASRC clusters, Cannon or FASSE. Additionally, we have packaged it as an app that can be launched from the cluster web interface, Open OnDemand (OOD).

KNIME as a module

KNIME is available as a module on the FASRC clusters. To learn more about the module, including the available versions and how to load one of them, execute the following from a terminal on the cluster: module spider knime

This pulls up information on the versions of KNIME that are available to load. For example, for user jharvard on a compute node, the module spider command produces the following output:


[jharvard@holy8a26602 ~]$ module spider knime/
knime:
Description:
An open-source data analytics, reporting, and integration platform meant to perform various aspects of machine-learning & data mining through its modular data pipelining concept.

Versions:
knime/5.4.3-fasrc01
knime/5.4.4-fasrc01

For detailed information about a specific "knime" package (including how to load the modules) use the module's full name.

Note that names that have a trailing (E) are extensions provided by other modules.

For example:
$ module spider knime/5.4.4-fasrc01


To load a specific version, execute: module load knime/5.4.3-fasrc01

Or, to load the default (typically the latest) version, run: module load knime. This results in, e.g.:

[jharvard@holy8a26602 ~]$ module load knime
[jharvard@holy8a26602 ~]$ module list
Currently Loaded Modules:
  1) knime/5.4.4-fasrc01

Once the knime module is loaded, you can launch the GUI by running the knime executable in the terminal, provided you ssh into the cluster with X11 forwarding enabled (preferably with the -Y option) and have XQuartz (macOS) or MobaXterm (Windows) installed on the local device you use to log in to the cluster. For example:

ssh -Y jharvard@login.rc.fas.harvard.edu

[jharvard@holylogin05 ~]$ salloc -p test --x11 --time=2:00:00 --mem=4g
[jharvard@holy8a26602 ~]$ module load knime
[jharvard@holy8a26602 ~]$ knime

You can ignore the following libGL errors; a GUI should appear as shown in the screenshot below.
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast

[Screenshot: KNIME GUI launched directly on Cannon]

Note: While you can launch KNIME directly on the cluster using X11 forwarding, the GUI is laggy and not well suited to workflows that need fast, responsive execution. To avoid issues associated with X11 forwarding, we recommend launching KNIME using OOD.

Both of these modules are also available via the KNIME OOD app, as explained below.

KNIME on OOD

KNIME can be run from Open OnDemand (OOD, formerly known as VDI) by choosing it from the Interactive Apps menu and specifying your resource needs. Hit Launch, wait for the session to start, then click the “Launch Knime” button.

You can also launch KNIME from the Remote Desktop app on OOD.

Pre-installed Extensions

Both KNIME modules come with the following pre-installed extensions:

  1. For GIS: Geospatial Analytics Extension for KNIME
  2. For Programming:
    KNIME Python Integration
    KNIME Interactive R Statistics Integration
  3. For Machine Learning:
    KNIME H2O Machine Learning Integration
    KNIME XGBoost Integration
    KNIME Machine Learning Interpretability Extension
  4. For OpenAI, Hugging Face, and other LLMs: KNIME AI Extension
  5. For AI Assistant Coding: KNIME AI Assistant (Labs)
  6. For Google Drive Integration: KNIME Google Connectors

Note: Users cannot install new extensions on the fly, as the modules do not come with write permissions.

KNIME Tutorial

The link here takes you to a KNIME tutorial prepared by Lingbo Liu of Harvard’s Center for Geographic Analysis (CGA). The tutorial is best followed by launching the KNIME app on OOD.

OpenAI

Description

See the OpenAI website and documentation.

Security

Please carefully read Harvard’s AI guidelines and Generative AI tool comparison. See our FASRC Guidelines for OpenAI Key and Harvard Agreement.

You may only use OpenAI and other generative AI tools on your own if the work is not Harvard-related and uses only public, non-sensitive data (data security level 1).

For data security levels 2 and 3, you need to work with your school to discuss your needs. It is your responsibility to make sure you are set up with the appropriate contractual coverage and environment, especially to avoid having the model learn from your input and leak sensitive information.

Installation

You can install OpenAI in a conda/mamba environment:

[jharvard@boslogin01 ~]$ salloc --partition test --time 01:00:00 --mem-per-cpu 4G -c 2
[jharvard@holy8a24301 ~]$ module load python/3.10.12-fasrc01
[jharvard@holy8a24301 ~]$ export PYTHONNOUSERSITE=yes
[jharvard@holy8a24301 ~]$ mamba create -n openai_env openai

Run OpenAI

You will need to provide an OpenAI key. You can generate one from
https://platform.openai.com/api-keys.


# Request an interactive job
[jharvard@boslogin01 ~]$ salloc --partition test --time 01:00:00 --mem-per-cpu 4G -c 2
# Source conda environment
[jharvard@holy8a24301 ~]$ mamba activate openai_env
# replace my_key with the key that you generated on OpenAI's website
[jharvard@holy8a24301 ~]$ export OPENAI_API_KEY='my_key'
# set SSL_CERT_FILE with system's certificate
(openai_env) [jharvard@holy8a24301 ~]$ export SSL_CERT_FILE='/etc/pki/tls/certs/ca-bundle.crt'
# run OpenAI example
(openai_env) [jharvard@holy8a24301 ~]$ python openai-test.py

Note: OpenAI uses the Python package httpx. You must set the variable SSL_CERT_FILE to the system’s certificate bundle. If you do not set SSL_CERT_FILE, OpenAI will give this error:


ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)
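
The openai-test.py script itself is not reproduced here. A minimal sketch of what such a test script could look like follows (the model name and prompt are illustrative assumptions; the script expects OPENAI_API_KEY to be exported as shown above):

# openai-test.py -- hypothetical minimal test of the OpenAI Python client
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Send a single chat request and print the reply
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)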

Examples

See User Codes on OpenAI including an example on OpenAI Whisper.


HeavyAI

What is HeavyAI?

See the HeavyAI website.

This software was formerly known as OmniSci.

HeavyAI in the FASRC Cannon cluster

HeavyAI is implemented on the FASRC cluster using Singularity.

We recommend carefully reading the HeavyAI hardware recommendations, as they provide details about the number of cores and the amount of RAM (memory) you should request. They also recommend using SSD storage, which is available through local scratch.

You may request specific GPU cards on the FASRC clusters using the --constraint flag. See our Job Constraints documentation for more details.
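
For instance, a hedged example of requesting a particular GPU type (the feature name a100 here is an assumption for illustration; check the Job Constraints documentation for the feature names valid on the cluster):

salloc -p gpu --gres=gpu:1 --constraint="a100" -t 0-02:00 --mem=16G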

Examples

We recommend using HeavyAI through the Open OnDemand interface via a VPN connection. We also offer learning sessions on Open OnDemand.

Should you need the command-line interface, see User Codes for an example of how to do so.

Note: You will need to provide your own license key to use HeavyAI; FASRC does not provide a license. You can request a free version on the HeavyAI downloads page, though the free license only allows limited computational resources.

Resources

  • Support Portal: The HEAVY.AI Support Portal offers a knowledge base, FAQs, troubleshooting resources, and access to community discussions, providing valuable assistance for academic users.
  • Resource Center: The HEAVY.AI Resource Center features whitepapers, solution briefs, case studies, and videos that can be beneficial for academic research and teaching purposes.
Claude

Description

See the Claude website and documentation.

Security

Please, carefully read Harvard’s AI guidelines and Generative AI tool comparison.

You may only use Anthropic tools and other generative AI models on public, non-sensitive data (data security level 1) on Cannon.

For data security levels 2 and 3, you need to work with your school to discuss your needs. It is your responsibility to make sure you are set up with the appropriate contractual coverage and environment, especially to avoid having the model learn from your input and leak sensitive information.

Installation

You can install Claude in a conda/mamba environment.

Here is a quick script to install Claude:

#!/bin/bash
# Request an interactive session first
salloc --partition=test --time=02:00:00 --mem=8G --cpus-per-task=2

# Create a conda environment and install the Anthropic client in it
module load python
export PYTHONNOUSERSITE=yes
mamba create --name claude_env python -y
source activate claude_env
pip install anthropic
conda deactivate

Running Claude

You will need to provide an Anthropic API key. You can generate one from their API page. Also, see their quickstart guide.
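
As a quick smoke test, a minimal sketch of a Claude script might look like the following (the model name and prompt are illustrative assumptions; the script expects ANTHROPIC_API_KEY to be exported in your shell):

# claude-test.py -- hypothetical minimal test of the Anthropic Python client
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Send one message and print the text of the reply
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model name
    max_tokens=256,
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(message.content[0].text)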

Examples

See FASRC User Codes repo for example Claude scripts.

See also the Anthropic Cookbooks.

Resources

Anthropic Quickstarts: A collection of projects designed to help developers quickly get started with building applications using the Anthropic API. Each quickstart provides a foundation that you can easily build upon and customize for your specific needs.

Anthropic Official Documentation: Comprehensive guide to using Claude, including setup, API usage, and troubleshooting.

Claude AI API Access: Portal to access the Claude API, set up API keys, and manage your integrations.

Claude’s Capabilities and Model Family: Learn about Claude’s different models such as Sonnet and Haiku, tailored for various performance needs.

PyTorch

Description

PyTorch, developed by Facebook’s AI Research lab, is an open-source machine learning library that offers a flexible platform for building deep learning models. It features a Python front end and integrates seamlessly with Python libraries like NumPy, SciPy, and Cython to extend its functionality. Unique for its use of dynamic computational graphs, unlike TensorFlow’s static graphs, PyTorch allows for greater flexibility in model design. This is particularly advantageous for research applications involving novel architectures.

The library supports GPU acceleration, enhancing performance significantly, which is vital for tackling high-level research tasks in areas such as climate change modeling, DNA sequence analysis, and AI research that involve large datasets and complex architectures. Automatic differentiation in PyTorch is handled through a tape-based system at both the functional and neural network layers, offering both speed and flexibility as a deep learning framework.

Installing PyTorch

These instructions are intended to help you install PyTorch on the FASRC cluster.

GPU Support: For general information on running GPU jobs, refer to our user documentation. To set up PyTorch with GPU support in your user environment, follow the steps below:

PyTorch with CUDA 12.1 in a conda environment

These instructions set up a conda environment with PyTorch version 2.3.0 and CUDA version 12.1, where the cuda-toolkit is installed directly in the conda environment.

Start an interactive job requesting GPUs, e.g. (note: you will want to start the session on the same type of hardware that you will eventually run on):

salloc -p gpu -t 0-06:00 --mem=8000 --gres=gpu:1

Load required software modules, e.g.,

module load python/3.10.13-fasrc01

Create a conda environment, e.g.,

mamba create -n pt2.3.0_cuda12.1 python=3.10 pip wheel

Activate the new conda environment:

source activate pt2.3.0_cuda12.1

Install cuda-toolkit version 12.1.0 with mamba

mamba install -c "nvidia/label/cuda-12.1.0" cuda-toolkit=12.1.0

Install PyTorch with mamba

mamba install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia

Install additional Python packages, if needed, e.g.,

mamba install -c conda-forge numpy scipy pandas matplotlib seaborn h5py jupyterlab jupyterlab-spellchecker scikit-learn

PyTorch with CUDA 11.8 from a software module

These instructions set up a conda environment with PyTorch version 2.2.0 and CUDA version 11.8, where CUDA is loaded as a software module, cuda/11.8.0-fasrc01.

# Start an interactive job on a GPU node (target the architecture where you plan to run), e.g.,
salloc -p gpu -t 0-06:00 --mem=8000 --gres=gpu:1

# Load the required modules, e.g.,
module load python 
module load cuda/11.8.0-fasrc01 # CUDA version 11.8.0

# Create a conda environment and activate it, e.g.,
mamba create -n pt2.2.0_cuda11.8 python=3.10 pip wheel -y
source activate pt2.2.0_cuda11.8

# Install PyTorch
mamba install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia

# Install additional packages, e.g.,
mamba install pandas scikit-learn matplotlib seaborn jupyterlab -y

Installing PyG (PyTorch Geometric)

After you have created and activated the conda environment pt2.3.0_cuda12.1, you can install PyG in your environment with the command:

(pt2.3.0_cuda12.1) [username@holygpu7c26103 ~]$ mamba install pyg -c pyg
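
You can then verify the installation (a quick check; torch_geometric is the package’s standard import name):

(pt2.3.0_cuda12.1) [username@holygpu7c26103 ~]$ python -c 'import torch_geometric; print(torch_geometric.__version__)'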

Running PyTorch:

If you are running PyTorch on a GPU with multi-instance GPU (MIG) mode enabled (e.g., the gpu_test partition), see PyTorch on MIG Mode below.

PyTorch checks

You can run the following checks to ensure that PyTorch was installed properly and can find the GPU card. Example output:

(pt2.3.0_cuda12.1_v0) [jharvard@holygpu7c26106 ~]$ python -c 'import torch;print(torch.__version__)'
2.3.0
(pt2.3.0_cuda12.1_v0) [jharvard@holygpu7c26106 ~]$ python -c 'import torch;print(torch.cuda.is_available())'
True
(pt2.3.0_cuda12.1_v0) [jharvard@holygpu7c26106 ~]$ python -c 'import torch;print(torch.cuda.device_count())'
1
(pt2.3.0_cuda12.1_v0) [jharvard@holygpu7c26106 ~]$ python -c 'import torch;print(torch.cuda.current_device())'
0
(pt2.3.0_cuda12.1_v0) [jharvard@holygpu7c26106 ~]$ python -c 'import torch;print(torch.cuda.device(0))'
<torch.cuda.device object at 0x14942e6579d0>
(pt2.3.0_cuda12.1_v0) [jharvard@holygpu7c26106 ~]$ python -c 'import torch;print(torch.cuda.get_device_name(0))'
NVIDIA A100-SXM4-40GB MIG 3g.20gb

Run PyTorch Interactively

For an interactive session to work with the GPUs, you can use the following:

salloc -p gpu -t 0-06:00 --mem=8000 --gres=gpu:1

Load required software modules and source your PyTorch conda environment.

[username@holygpu7c26103 ~]$ module load python/3.10.12-fasrc01
[username@holygpu7c26103 ~]$ source activate pt2.3.0_cuda12.1
(pt2.3.0_cuda12.1) [username@holygpu7c26103 ~]$

Test PyTorch interactively:

(pt2.3.0_cuda12.1) [username@holygpu7c26103 ~]$ python check_gpu.py
Using device: cuda

NVIDIA A100-SXM4-40GB
Memory Usage:
Allocated: 0.0 GB
Reserved:  0.0 GB

tensor([[-2.3792, -1.2330, -0.5143,  0.5844]], device='cuda:0')

The script check_gpu.py checks whether GPUs are available and, if so, sets up the device to use them.
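
A minimal sketch of what check_gpu.py might contain, reconstructed from the output shown above (the actual script may differ; see our User Codes repo):

# check_gpu.py -- select the GPU if available, report memory usage, run a small test
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f'Using device: {device}\n')

if device.type == 'cuda':
    print(torch.cuda.get_device_name(0))
    print('Memory Usage:')
    print(f'Allocated: {round(torch.cuda.memory_allocated(0)/1024**3, 1)} GB')
    print(f'Reserved:  {round(torch.cuda.memory_reserved(0)/1024**3, 1)} GB')

# Small random tensor on the selected device as a smoke test
print()
print(torch.randn(1, 4, device=device))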

Run PyTorch with Batch Jobs

An example batch-job submission script is included below:

#!/bin/bash
#SBATCH -c 1
#SBATCH -N 1
#SBATCH -t 0-00:30
#SBATCH -p gpu
#SBATCH --gres=gpu:1
#SBATCH --mem=4G
#SBATCH -o pytorch_%j.out 
#SBATCH -e pytorch_%j.err 

# Load software modules and source conda environment
module load python/3.10.12-fasrc01
source activate pt2.3.0_cuda12.1

# Run program
srun -c 1 --gres=gpu:1 python check_gpu.py

If you name the above batch-job submission script run.sbatch, for instance, the job is submitted with:

sbatch run.sbatch
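
You can then monitor the job and, once it finishes, inspect the output and error files named in the submission script (replace <jobid> with the job ID that sbatch reports):

squeue -u jharvard        # check job status
cat pytorch_<jobid>.out   # view program output after completion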

PyTorch and Jupyter Notebook on Open OnDemand

If you would like to use the PyTorch environment on Open OnDemand/VDI, you will also need to install packages ipykernel and ipywidgets with the following commands:

(pt2.3.0_cuda12.1) [username@holygpu7c26103 ~]$ mamba install ipykernel ipywidgets

PyTorch on MIG Mode

Note: Currently only the gpu_test partition has MIG mode enabled.

# Get GPU card name
nvidia-smi -L

# Set CUDA_VISIBLE_DEVICES with the MIG instance
export CUDA_VISIBLE_DEVICES=MIG-5b36b802-0ab0-5f37-af2d-ac23f40ef62d

Or automate the process with:

export CUDA_VISIBLE_DEVICES=$(nvidia-smi -L | awk '/MIG/ {gsub(/[()]/,"");print $NF}')
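
With CUDA_VISIBLE_DEVICES set this way, PyTorch should see exactly one MIG device; a quick check:

python -c 'import torch; print(torch.cuda.device_count(), torch.cuda.get_device_name(0))'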

Best Practices


Pull a PyTorch Singularity Container

Alternatively, you can pull and use a PyTorch Singularity container:

singularity pull docker://pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime
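
The pull produces a .sif image in the current directory (pytorch_2.1.0-cuda12.1-cudnn8-runtime.sif, following Singularity’s default naming). You can then run a quick GPU check inside the container on a GPU node, e.g.:

singularity exec --nv pytorch_2.1.0-cuda12.1-cudnn8-runtime.sif \
    python -c 'import torch; print(torch.cuda.is_available())'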

Other PyTorch/CUDA versions

To install other versions, refer to the PyTorch compatibility chart on the PyTorch website.

Examples

For example scripts covering installation and use cases, see our User Codes > AI > PyTorch repo.
