AI – FASRC DOCS
https://docs.rc.fas.harvard.edu

OpenAI
https://docs.rc.fas.harvard.edu/kb/openai/

Description

See the OpenAI website and documentation.

Security

Please carefully read Harvard’s AI guidelines and Generative AI tool comparison. See our FASRC Guidelines for OpenAI Key and Harvard Agreement.

You may only use OpenAI and other generative AI (genAI) tools on your own if the work is not Harvard-related and/or involves only public, non-sensitive data (data security level 1).

For data security levels 2 and 3, you need to work with your school to discuss your needs. It is your responsibility to make sure you are set up with the appropriate contractual coverage and environment, especially to avoid having the model learn from your input and leak sensitive information.

Examples

See our User Codes repository for OpenAI examples, including one for OpenAI Whisper.
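If you just want a quick smoke test outside of those examples, the following is a minimal sketch; the environment name, Python version, and model name are placeholders to adapt, and the API key must be your own:

mamba create -n openai_env python=3.12 pip -y
mamba activate openai_env    # or: source activate openai_env
pip install openai

# Export your key for the current session; never hard-code keys in scripts or commit them
export OPENAI_API_KEY="sk-..."    # replace with your own key

# Quick connectivity test
python - <<'EOF'
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; use one you are approved for
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
EOF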

Resources

 

HeavyAI
https://docs.rc.fas.harvard.edu/kb/heavyai/

What is HeavyAI?

See the HeavyAI website.

This software was formerly known as OmniSci.

HeavyAI in the FASRC Cannon cluster

HeavyAI is implemented on the FASRC cluster using Singularity.

We recommend carefully reading the HeavyAI hardware recommendations, as they detail the number of cores and amount of RAM (memory) you should request. They also recommend using SSD storage, which is available through local scratch.

You may request specific GPU cards on the FASRC clusters using the --constraint flag. See our Job Constraints documentation for more details.
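As an illustrative sketch only, a batch-job header along these lines requests a GPU with a constraint and sets up a storage directory on local scratch; the partition, constraint, core, memory, and path values below are placeholders, not FASRC defaults:

#!/bin/bash
#SBATCH -p gpu                     # example partition name
#SBATCH --gres=gpu:1               # request one GPU
#SBATCH --constraint=a100          # example GPU constraint; see the Job Constraints page
#SBATCH -c 16                      # cores, sized per the HeavyAI hardware recommendations
#SBATCH --mem=64G                  # memory, sized per the HeavyAI hardware recommendations
#SBATCH -t 04:00:00

# Keep HeavyAI storage on node-local (SSD-backed) scratch, as recommended
STORAGE_DIR="/scratch/$USER/heavyai_storage"   # example local-scratch path
mkdir -p "$STORAGE_DIR"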

Examples

We recommend using HeavyAI through the Open OnDemand interface over a VPN connection. We also offer learning sessions for Open OnDemand.

Should you need the command-line interface, see User Codes for an example of how to do so.

Note: You will need to provide your own license key to use HeavyAI; FASRC does not provide a license. You can request a free version on the HeavyAI downloads page. Note that the free license allows only limited computational resources.

Resources

  • Support Portal: The HEAVY.AI Support Portal offers a knowledge base, FAQs, troubleshooting resources, and access to community discussions, providing valuable assistance for academic users.
  • Resource Center: The HEAVY.AI Resource Center features whitepapers, solution briefs, case studies, and videos that can be beneficial for academic research and teaching purposes.
Claude
https://docs.rc.fas.harvard.edu/kb/claude/

Description

See the Claude website and documentation.

Security

Please carefully read Harvard’s AI guidelines and Generative AI tool comparison.

You may only use Anthropic tools and other generative AI (genAI) models on public, non-sensitive data (data security level 1) on Cannon.

For data security levels 2 and 3, you need to work with your school to discuss your needs. It is your responsibility to make sure you are set up with the appropriate contractual coverage and environment, especially to avoid having the model learn from your input and leak sensitive information.

Installation

You can install the Anthropic Python client for Claude in a conda/mamba environment; a minimal sketch is shown below.

See the FASRC User Codes > AI > Anthropic git repo for example scripts on installing and running Claude.
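A minimal sketch, assuming the client is installed from PyPI as the anthropic package (the environment name and Python version are examples):

mamba create -n claude_env python=3.12 pip -y
mamba activate claude_env    # or: source activate claude_env
pip install anthropic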

Running Claude

You will need to provide an Anthropic API key. You can generate one from their API page. Also, see their quickstart guide.
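Once your key is generated, a minimal sketch of exporting it and making a test call looks like the following (the model name is an example and may change over time):

# Export the key for your session; never hard-code keys in scripts or commit them
export ANTHROPIC_API_KEY="sk-ant-..."    # replace with your own key

python - <<'EOF'
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # example model name
    max_tokens=64,
    messages=[{"role": "user", "content": "Hello"}],
)
print(message.content[0].text)
EOF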

Examples

See FASRC User Codes repo for example Claude scripts.

See also Anthropic’s own Cookbooks.

Resources

  • Anthropic Quickstarts: A collection of projects designed to help developers quickly get started with building applications using the Anthropic API. Each quickstart provides a foundation that you can easily build upon and customize for your specific needs.
  • Anthropic Official Documentation: Comprehensive guide to using Claude, including setup, API usage, and troubleshooting.
  • Claude AI API Access: Portal to access the Claude API, set up API keys, and manage your integrations.
  • Claude’s Capabilities and Model Family: Learn about Claude’s different models, such as Sonnet and Haiku, tailored for various performance needs.

PyTorch
https://docs.rc.fas.harvard.edu/kb/pytorch/

Description

PyTorch, developed by Facebook’s AI Research lab, is an open-source machine learning library that offers a flexible platform for building deep learning models. It features a Python front end and integrates seamlessly with Python libraries like NumPy, SciPy, and Cython to extend its functionality. Unique for its use of dynamic computational graphs, unlike TensorFlow’s static graphs, PyTorch allows for greater flexibility in model design. This is particularly advantageous for research applications involving novel architectures.

The library supports GPU acceleration, enhancing performance significantly, which is vital for tackling high-level research tasks in areas such as climate change modeling, DNA sequence analysis, and AI research that involve large datasets and complex architectures. Automatic differentiation in PyTorch is handled through a tape-based system at both the functional and neural network layers, offering both speed and flexibility as a deep learning framework.
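As a minimal sketch of the dynamic-graph, tape-based autograd described above (run it inside any environment or container that provides PyTorch; it falls back to the CPU if no GPU is visible):

python - <<'EOF'
import torch

# Use the GPU if one is visible to the job, otherwise fall back to CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# requires_grad=True asks autograd to record operations on this tensor
x = torch.randn(3, device=device, requires_grad=True)
y = (x ** 2).sum()

# The backward pass replays the recorded tape to compute dy/dx = 2x
y.backward()
print(device, x.grad)
EOF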

Best Practices

PyTorch and Jupyter Notebook on Open OnDemand

To use PyTorch in Jupyter Notebook on Open OnDemand/VDI, install ipykernel and ipywidgets:

mamba install ipykernel ipywidgets
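After installing, you can register the environment as a Jupyter kernel so it shows up in the notebook interface; a minimal sketch, assuming your environment is named pytorch_env (adjust the names to your own setup):

python -m ipykernel install --user --name pytorch_env --display-name "PyTorch (pytorch_env)"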

Pull a PyTorch Singularity Container

Alternatively, you can pull and use a PyTorch Singularity container:

singularity pull docker://pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime
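The pull command writes a .sif image to the current directory. As a minimal sketch, you can then run PyTorch from the container with GPU support via the --nv flag (the filename below matches the tag pulled above):

singularity exec --nv pytorch_2.1.0-cuda12.1-cudnn8-runtime.sif \
    python -c "import torch; print(torch.__version__, torch.cuda.is_available())"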

PyTorch on MIG Mode

Note: Currently, only the gpu_test partition has MIG (Multi-Instance GPU) mode enabled.


# Get GPU card name
nvidia-smi -L

# Set CUDA_VISIBLE_DEVICES to the MIG instance UUID reported by nvidia-smi -L
# (the UUID below is an example; use the one from your own job)
export CUDA_VISIBLE_DEVICES=MIG-5b36b802-0ab0-5f37-af2d-ac23f40ef62d

Or automate the process with:

export CUDA_VISIBLE_DEVICES=$(nvidia-smi -L | awk '/MIG/ {gsub(/[()]/,"");print $NF}')
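As a quick check, you can confirm that PyTorch sees only the selected MIG instance (a minimal sketch; with a single MIG instance exposed, the device count should be 1):

python -c "import torch; print(torch.cuda.device_count(), torch.cuda.get_device_name(0))"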

Examples

For example scripts covering installation and use cases, see our User Codes > AI > PyTorch repo.

External Resources:
