Kempner Institute Partitions
https://docs.rc.fas.harvard.edu/kb/kempner-partitions/

Overview

For questions regarding usage and etiquette of the Kempner resources or the Kempner Institute Computing Handbook, please contact Kempner Support. You can also post Kempner-cluster-specific questions, concerns, and requests for advice in the #cluster-users channel of the Kempner Slack workspace. For access to the Slack channel, please contact Nikki Shawn.

NOTE: Recent changes mean that not all members of a Kempner lab group can run jobs on the Kempner partitions. You must first request job access from Kempner. Please consult your PI for details on gaining Kempner job privileges and a link to the Kempner access form.

Kempner expects its users to review the Kempner Institute Computing Handbook before using the cluster.

The Kempner Institute for the Study of Natural and Artificial Intelligence has purchased its own partitions on the FASRC cluster.  These partitions are open to users affiliated with Kempner, and their allocation is governed by the relative fairshare of each group. If yours is a Kempner-affiliated lab group and you have been given explicit access to run jobs in Kempner, you can submit to these partitions just as you would any other partition, but you must indicate which fairshare pool you are using by adding #SBATCH --account=<kempner_fairshare_pool> to your job script.
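
A minimal sketch of a batch script header is shown below (the partition, GPU count, and time are illustrative; replace <kempner_fairshare_pool> with the pool you have been approved for):

  #!/bin/bash
  #SBATCH --partition=kempner                  # one of the Kempner partitions listed below
  #SBATCH --account=<kempner_fairshare_pool>   # your approved Kempner fairshare pool (see Fairshare below)
  #SBATCH --gres=gpu:1                         # the Kempner partitions are GPU partitions
  #SBATCH --time=1-00:00:00                    # must be within the partition's time limit
  #SBATCH --nodes=1
  #SBATCH --ntasks=1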

A dashboard showing the relative allocations can be found here (RC VPN or Harvard network required).  The partitions are the following:

  • kempner: This block of GPUs contains 2112 Intel Ice Lake cores and 132 Nvidia A100 GPUs. Each node is water-cooled and contains 64 cores, 1TB of RAM, and 4 GPUs. The nodes are interconnected with HDR Infiniband. This partition has a 7 day time limit. You must request less than 24 CPUs/GPU and 360GB/GPU (see the worked example after this list).
  • kempner_h100: This block of GPUs contains 9216 AMD Genoa cores and 384 Nvidia H100 GPUs. Each node is water-cooled and contains 96 cores, 1.5TB of RAM, and 4 GPUs. The nodes are interconnected with NDR Infiniband. This partition has a 3 day time limit. You must request less than 48 CPUs/GPU and 512GB/GPU.
  • kempner_h200: This block of GPUs contains 1120 Intel Sapphire Rapids cores and 40 Nvidia H200 GPUs. Each node is water-cooled and contains 112 cores, 1TB of RAM, and 4 GPUs. The nodes are interconnected with NDR Infiniband. This partition has a 3 day time limit. You must request less than 48 CPUs/GPU and 512GB/GPU.
  • kempner_interactive: This block of GPUs contains 192 Intel Ice Lake cores and 24 3g.20gb A100 MIG GPUs. For more on MIG mode, see the Nvidia documentation. Each node is water-cooled and contains 64 cores, 1TB of RAM, and 8 MIG instances. The nodes are interconnected with HDR Infiniband. This partition has an 8 hour time limit. You must request less than 8 CPUs/GPU and 120GB/GPU.
  • kempner_requeue: This partition underlies the kempner, kempner_h100, kempner_h200, and kempner_interactive partitions and is intended for work that can be requeued by higher-priority work. This behavior is the same as that of the public requeue partitions, gpu_requeue and serial_requeue. Like the public requeue partitions, it is also charged at half cost with respect to fairshare. This partition has a 7 day time limit.
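
As a worked example (a sketch only; the values are illustrative and must stay within the per-GPU limits above), a single-GPU job on kempner_h100 could request:

  #!/bin/bash
  #SBATCH --partition=kempner_h100
  #SBATCH --account=<kempner_fairshare_pool>   # your approved Kempner fairshare pool (see Fairshare below)
  #SBATCH --gres=gpu:1                         # one H100
  #SBATCH --cpus-per-task=16                   # under the 48 CPUs-per-GPU limit
  #SBATCH --mem=128G                           # under the 512GB-per-GPU limit
  #SBATCH --time=2-00:00:00                    # within the 3 day limit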

For more information about Slurm partitions on the FAS RC cluster, please refer to the Running Jobs document.  For information on how to run jobs on GPUs elsewhere on the Cannon cluster, outside the Kempner partitions, see this document.

PLEASE NOTE – HMS/Kempner: If you are in one of the Kempner groups whose PI is based at HMS, you will not be able to submit ordinary jobs to the FASRC cluster. If you mistakenly submit a job to one of our normal partitions, the job will pend forever. Please contact Spencer Pruitt at HMS if you are a Kempner HMS user who needs access.

Access

Access to the partitions is via membership in an existing Kempner lab group on the FASRC cluster and approval after filling out a Kempner access form. Please consult your PI for instructions and a link to the form. This process is managed by Kempner; FASRC staff cannot add you to Kempner resources directly.

  • For PIs who believe their lab should belong to kempner_users,  please contact Max Shad or Elise Porter at the Kempner Institute.
  • For individuals who believe they should have access, please consult your PI for instructions and a link to the Kempner access form, or contact Kempner Support.

Fairshare

FASRC’s general fairshare policy applies to the Kempner cluster. However, in the case of the Kempner cluster, the fairshare is split from your normal lab fairshare into a specific pool for your lab or group.

  • kempner_<pi_lab>: The lab-specific fairshare pool. As with the normal FASRC fairshare policy, all users in these pools pull from the same common allocation block. To be eligible for this pool, a user must first belong to the <pi_lab> group.
    Example: for a user to be added to the jharvard_kempner_lab fairshare pool, they would first need to be a member of the jharvard_lab group as their primary or secondary group. They would then also need to be approved by Kempner to run jobs. See Access above.
  • kempner_grads: This pool is for graduate students who are affiliated with Kempner but may or may not be members of Kempner-affiliated labs. Each user in this pool has their own fairshare allocation; however, there is a cap on the pool's common fairshare based on the full group's usage.
  • kempner_fellows: This pool is for postdocs and other fellows who are affiliated with Kempner but may or may not be members of Kempner-affiliated labs. Each user in this pool has their own fairshare allocation; however, there is a cap on the pool's common fairshare based on the full group's usage.
  • kempner_hms: This pool is for researchers from HMS who are affiliated with Kempner but do not have sponsorship via a lab.  Each user in this pool has their own fairshare allocation; however, there is a cap on the pool's common fairshare based on the full group's usage. Spencer Pruitt from HMS manages access to this group; please contact him with access questions.
  • kempner_undergrads: This pool is for undergraduate students who are affiliated with Kempner but may or may not be members of Kempner-affiliated labs. Each user in this pool has their own fairshare allocation; however, there is a cap on the pool's common fairshare based on the full group's usage.

By default, users on FASRC resources utilize their primary group for fairshare; this also applies to Kempner. Therefore, users of Kempner resources should include the --account flag to indicate which pool they wish to use when running jobs on the Kempner partitions. Kempner fairshare accounts are not to be used outside of the Kempner partitions; for other resources on Cannon, users should use their normal fairshare accounts.
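
For example (a sketch; the pool and script names are placeholders):

  # Kempner work: submit against your Kempner fairshare pool
  sbatch --account=<kempner_fairshare_pool> --partition=kempner_requeue my_job.sbatch

  # Non-Kempner work elsewhere on Cannon: use your normal lab account
  sbatch --account=<pi_lab> --partition=gpu_requeue my_job.sbatch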

Contact Information

To contact specific individuals at the Kempner Institute, please browse to their contact page.

Black Hole Initiative (BHI) Partitions
https://docs.rc.fas.harvard.edu/kb/bhi-partitions/

Overview

The Black Hole Initiative (BHI) has purchased its own partitions on the FASRC cluster.  These partitions are open to BHI members, and their allocation is governed by the relative fairshare of the groups.  A dashboard showing the relative allocations can be found here (RC VPN or Harvard network required).  The partitions are broken down into the following:

  • blackhole: 1920 cores of water-cooled Intel Cascade Lake; each node has 48 cores and 184 GB of RAM on the HDR Infiniband fabric.  This partition has no time limit.
  • blackhole_priority: Overlaps the blackhole partition.  This partition is for priority work only, and users must ask permission of the BHI faculty prior to running. Jobs in this partition will run ahead of jobs on the blackhole partition regardless of the user's fairshare score.  This partition has no time limit.
Geophysics Partitions
https://docs.rc.fas.harvard.edu/kb/geophysics-partitions/

Geophysics has purchased its own partitions on the FASRC cluster.  These partitions are open to select groups in Geophysics, and their allocation is governed by the relative fairshare of the groups.  A dashboard showing the relative allocations can be found here (RC VPN or Harvard network required).  The partitions themselves are broken down into the following:

  • geophysics: 384 cores of water-cooled Intel Cascade Lake; each node has 48 cores and 184 GB of RAM, connected by HDR Infiniband.  This partition has a 14 day time limit.

For more information about Slurm partitions on the FAS RC cluster, please refer to the Running Jobs document.

Other Institutions
https://docs.rc.fas.harvard.edu/kb/other-institutions/

This page is an addendum to our Qualifications page and is meant to help answer questions about other institutions which have queried us and whether they qualify for RC cluster access. This list is not comprehensive.

  • GSE – Our agreements with the Institute for Quantitative Social Science, who support some GSE groups, are very specific and all on-boarding to the cluster for their users is handled by IQSS. If unsure how to proceed, please discuss with your IQSS partners or contact FASRC.
  • GSD – We do not currently have a billing agreement with the Graduate School of Design for general use.
  • DCE/HES – We do not support or provide access for the Division of Continuing Education or Extension School.
  • SDM – We do not currently have a billing agreement with the School of Dental Medicine. Please contact us if you wish to discuss cluster access.
  • Harvard i-lab – We do not provide access or support for distinct entities such as the Harvard Innovation Lab. However, any of its staff who have active appointments in the FAS or SEAS may be individually qualified for an RC account of their own through the normal process.
  • EdLabs – We do not directly support the EdLabs, but we do provide access to those members who have active appointments in the FAS or SEAS and are, therefore, individually qualified for an RC account of their own through the normal process.
  • MIT – We do not provide access or support for MIT. However, those few FAS/SEAS faculty who are jointly appointed at MIT may choose to allow their MIT students to have RC accounts as collaborators under the RC lab group.
  • SAO – The Smithsonian Astrophysical Observatory, while located at the Harvard-Smithsonian Center for Astrophysics (CfA), is an outside entity and not a part of the FAS. Its members are therefore not supported by FAS RC. Only members of the CfA or researchers sponsored by CfA faculty are currently eligible for RC cluster accounts.
SEAS Compute Resources
https://docs.rc.fas.harvard.edu/kb/seas-compute-resources/

The Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) has a number of compute resources managed by FAS Research Computing. These compute partitions are open to all researchers at SEAS and their allocation is governed by the relative fairshare of the groups. The partitions themselves are broken down into the following:

  • seas_compute: This partition contains 5584 cores ranging from Intel Cascade Lake to Intel Sapphire Rapids. This partition has a 7 day time limit.
  • seas_gpu: This partition contains 4480 cores ranging from Intel Ice Lake to AMD Genoa and 228 GPUs ranging from Nvidia A100 to Nvidia H100. This partition has a 7 day time limit.  Interactive jobs on seas_gpu are limited to less than 6 hours and no more than 2 cores.

seas_compute and seas_gpu are mosaic partitions, meaning they contain a variety of hardware and interconnects. If you require specific types of hardware, use the --constraint option in Slurm. A full list of constraints can be found on the Running Jobs page. To select specific GPU models, see the GPU section of the Running Jobs page. For more information about Slurm partitions on the FAS RC cluster, please refer to the Running Jobs document.
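
For example (a sketch; the constraint name shown is illustrative, so check the constraint list on the Running Jobs page before relying on it):

  #SBATCH --partition=seas_gpu
  #SBATCH --gres=gpu:1
  #SBATCH --constraint="h100"   # illustrative feature tag; restricts the job to matching nodes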

For researchers needing a secure environment, the FAS Secure Environment (FASSE) is a secure multi-tenant cluster that provides Harvard researchers access to a secure enclave for analysis of sensitive datasets with DUAs and IRBs classified as Level 3.  Please see the FASSE cluster documentation for how to gain access. Note that a home folder on FASSE is separate from any home folder you might have on the FASRC (Cannon) cluster. Data from the secure Level 3 (FASSE) environment should not be transferred into Level 2 space (Cannon).

Harvard University Center for the Environment (HUCE) Partitions
https://docs.rc.fas.harvard.edu/kb/huce-partitions/

Overview

The Harvard University Center for the Environment (HUCE) has purchased its own partitions on the FASRC cluster.  These partitions are open to select groups in HUCE, and their allocation is governed by the relative fairshare of the groups.  A dashboard showing the relative allocations can be found here (RC VPN or Harvard network required).  The partitions are as follows; all are on the HDR Infiniband fabric and have no time limit.

  • huce_bigmem: 64 cores of Intel Ice Lake; each node has 64 cores and 2TB of RAM.
  • huce_cascade: 6672 cores of water-cooled Intel Cascade Lake; each node has 48 cores and 184 GB of RAM. Subject to requeue by huce_cascade_priority.
  • huce_cascade_priority: Overlaps huce_cascade and is accessible only to groups with a fairshare score higher than 0.75; this threshold may change in the future based on decisions by the HUCE Cluster Allocation Committee.  This partition will requeue jobs in huce_cascade to make room for jobs submitted to huce_cascade_priority.  Jobs on huce_cascade are requeued only if needed, starting with jobs that have the lowest fairshare and have run for the least amount of time.
  • huce_ice: 2432 cores of water-cooled Intel Ice Lake; each node has 64 cores and 500 GB of RAM.

For more information about Slurm partitions on the FAS RC cluster, please refer to the Running Jobs document.

Usage and Reservations

In general, the usage of the HUCE partitions is governed by fairshare.  Since the resource is of a fixed size, users should be aware of how much their runs will impact their lab's usage and hence its priority in the queue.  The scalc utility can be used to project how much usage a specific job will have.  seff, seff-array, and seff-account can be used to find out how much memory a job actually used and thus tune job requests to the right size.  If you want to learn more about optimizing your usage of the cluster and how fairshare works, feel free to contact FASRC and our staff will be more than happy to work with you and your lab.
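
For example, to check how much CPU and memory a completed job actually used (the job ID below is a placeholder; the seff-array invocation is a sketch):

  # Report CPU and memory efficiency for a completed job
  seff 12345678

  # Summarize efficiency across the tasks of a job array (FASRC utility)
  seff-array 12345678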

Labs may request access to a reservation for a limited amount of time.  Reservations set aside compute for the group's immediate use.  When you request a reservation, please include the following information:

  1. Which users need access.
  2. The characteristics of the jobs that will be run (i.e. how many cores, how much memory, how long, how many jobs).
  3. How long you will need the reservation for.
  4. Why you need the reservation.

This information will help us decide what resources to allocate to the reservation.  Reservations can be set up on a recurring basis to aid in development work, or can be used to meet an approaching deadline.
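
Once a reservation is in place, jobs are submitted into it with Slurm's --reservation flag (a sketch; the reservation name and script name are placeholders):

  # Run a job inside the reservation set aside for your group
  sbatch --reservation=<reservation_name> --partition=huce_cascade my_job.sbatch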

Harvard Climate Modeling Wiki

The climate modeling groups also maintain a wiki that contains information about common software and workflows used in climate modeling.  The wiki administrators are Andrew Conahan, Lei Wang, and Packard Chan.  Please contact them for more information on how to contribute.

Discussion List

HUCE maintains a Google Group for cluster discussion at fasrc-huce-users@g.harvard.edu.  This list is appropriate for discussing cluster usage, code compilation, and other topics related to high performance computing. Thanks to Melissa Sulprizio (Jacob Lab) for working with HUIT to get this set up.  The current owners (listed below) can add new users to the list:

Melissa Sulprizio
msulprizio@g.harvard.edu

Zhiming Kuang
kuang@fas.harvard.edu

Additional Affiliate Resources
https://docs.rc.fas.harvard.edu/kb/other-resources/

External computing resources

The following is a list of additional documentation outside FAS RC which may be useful to the research community:

Further Reading – Links to topics of interest to users of FAS RC services

Center for Education Policy Research (CEPR) FAQs
https://docs.rc.fas.harvard.edu/kb/center-for-education-policy-research-cepr-faqs/

Q. What do I need to do to sign up for a FAS Research Computing account?

A. Complete the sign up form process at https://portal.rc.fas.harvard.edu/request/account/new.
Q. If I already have a Duo token from Harvard, do I need to get another one for RC?
A. Yes, RC has its own two-factor system. You can either add a second token to your Duo app, or install Google Authenticator and add your token to that phone application. Please follow the instructions here: OpenAuth tool
For more detailed instructions on setting up the VPN in general, see VPN-Setup.
Q. Why can't I browse the internet while on the VPN?
A. Please ensure your browser is respecting system proxy settings. See the proxy instructions here for more details.

Account Qualifications and Affiliations
https://docs.rc.fas.harvard.edu/kb/account-qualifications/

If you are unsure whether or not you qualify for an RC account, or what arrangements your school or department has with FAS Research Computing, this document may help you sort out these questions before requesting an account. If you then wish to proceed, use the following link to the account request tool.

POLICIES

Cluster access is intended only for legitimate purposes which benefit research at Harvard University. Access must be authorized by the faculty or management of the FAS and our partners, and by the staff of Research Computing. Account access should only be granted for the purposes necessary to accomplish the goals of Harvard University and its research projects. All active FAS RC account holders are subscribed to our notifications mailing list which is a requirement for all users.

Cluster usage and additional resources may be subject to charges to the PI, school, or department. All billing is done via Harvard internal billing codes at the Tub/school level. See our billing FAQ.

All account holders, whether Harvard affiliates or outside collaborators, agree to be held accountable by the Harvard University Electronic Access and Information Security policies: http://huit.harvard.edu/information-technology-policies. In addition, researchers should make themselves familiar with the university research policies maintained by the Provost’s Office.

TIPS BEFORE SIGNUP

  • The majority of users will sign up as POSTDOC/RESEARCHER, GRAD STUDENT, UNDERGRAD, or STAFF. See Job Titles below for clarification.
  • When requesting FACULTY status during sign-up, you must have a Harvard faculty appointment with a supported affiliation or your request may be rejected. Faculty accounts have additional controls and privileges and are provided only to those with faculty appointments.
  • If you have a non-faculty appointment but have PI rights with a supported affiliation and your own 33-digit billing string(s), you should select NON-FACULTY PI (your account and PI rights will need to be vetted before approval. FASRC may contact you for more information).
  • If you are a Harvard faculty member but wish to sign up as an individual collaborator under another faculty member’s cluster group, please choose POSTDOC/RESEARCHER as choosing FACULTY will require billing information and start the new lab setup process.
  • If you are a faculty member at another university collaborating with a Harvard PI, please choose EXTERNAL.
  • If you are a student or researcher at another university or company collaborating with a Harvard PI, please choose EXTERNAL.

 

PRIMARY AFFILIATED SCHOOLS

  • FAS – Research Computing is primarily funded by the Faculty of Arts and Sciences. As such, active FAS affiliates (faculty, staff, researchers, post-docs, etc.) qualify for an RC account and general cluster use. FAS manages the basic cost, so there is no cost to FAS departments for access. Additional dedicated data storage is available at a cost through our Storage Service Center.
  • Rowland Institute – Rowland is considered a part of the FAS and its appointed staff are eligible for RC accounts and cluster access. Additional dedicated data storage is available at a cost through our Storage Service Center.
  • SEAS – Research Computing has an arrangement with the School of Engineering and Applied Sciences to provide cluster access accounts and cluster storage for faculty, staff, researchers and other appointed users. Accounts and storage usage are billed to SEAS at the school level. Additional dedicated data storage is available at a cost through our Storage Service Center.
  • HCSPH – Research Computing has an arrangement with the Harvard T. H. Chan School of Public Health. Cluster and/or storage usage for faculty, staff, researchers, and other appointed users is billed to HCSPH at the school level. For more details and account sign-up instructions, please see our dedicated HCSPH overview and Q&A pages. Additional dedicated data storage is available at a cost through our Storage Service Center.
  • IQSS/GSD/GSE – Our agreements with the Institute for Quantitative Social Science are very specific and all on-boarding to the cluster for their users is handled by IQSS. If unsure how to proceed, please discuss with your IQSS partners or contact FASRC.
  • HBS – Faculty members from The Harvard Business School should first contact HBS research computing to determine their needs before signing up. Research Computing has a billing arrangement with HBS. Cluster and/or storage usage for faculty, staff, researchers, and other appointed users is billed to HBS at the school level.  Additional dedicated data storage is available at a cost through our Storage Service Center.
  • HKS – Research Computing has an arrangement with The Harvard Kennedy School of Government to provide cluster access accounts and cluster storage for faculty, staff, researchers and other appointed users. Accounts and storage usage are billed to HKS at the school level.  Additional dedicated data storage is available at a cost through our Storage Service Center.
  • HLS – Research Computing has an arrangement with The Harvard Law School to provide cluster access accounts and cluster storage for faculty, staff, researchers and other appointed users. Accounts and storage usage are billed to HLS at the school level.  Additional dedicated data storage is available at a cost through our Storage Service Center.
  • Radcliffe – Research Computing has an arrangement with Radcliffe to provide cluster access accounts and cluster storage for faculty, staff, researchers, and other appointed users. Accounts and storage usage are billed to Radcliffe at the school level.  Additional dedicated data storage is available at a cost through our Storage Service Center.

UNAFFILIATED SCHOOLS

  • HMS – The Harvard Medical School maintains its own cluster (Orchestra) which is available to you. You can find more information from them at: https://rc.hms.harvard.edu/.
    HMS has informed us that they do not sponsor or otherwise cover the costs of any HMS PI/researchers using the FAS cluster. We do not have an MoU with HMS that allows us to bill charges directly to the school.
    If you are collaborating with a PI at HSPH and need an account for that purpose, you should select the Researcher/Postdoc job type and select that HSPH PI as your sponsor.
  • Hospitals – The hospitals are not Harvard-owned entities and we are not able to bill or support external entities. The Harvard Medical School maintains its own cluster (Orchestra) which you are likely eligible to use. You can find more information from them at: https://rc.hms.harvard.edu/.
    If you are collaborating with a PI at HSPH and need an account for that purpose, you should select the Researcher/Postdoc job type and select that HSPH PI as your sponsor.
  • Harvard University Library – We have a very limited MoU with only one department of the university library. If additional departments wish to use our services, please contact us to discuss crafting an MoU for your department/group.
  • SAO/Smithsonian – At this time we cannot support or provide access for Smithsonian Astrophysical Observatory (SAO) staff and researchers. If and when we have an agreement and billing path, we will notify the SAO leadership.
  • OTHER – See our Other Institutions page for additional information

If your school does not have a MoU (memorandum of understanding) with FASRC and you are interested in establishing one to facilitate billing for your school, please have your administration or finance reach out to FASRC to start the process.

Please Note: This option is only available to schools within the university which can provide TUB level (33 digit code) billing strings. The hospitals and Smithsonian are not eligible as they are outside entities not belonging to the college. FASRC is a cost center and cannot bill to POs or invoice outside entities, only internally via university 33-digit codes.

JOB TITLES

You will be asked to select a Job Title during signup. Your selection should match your appointment or employment type no matter your intended use.

  • STAFF – You are an employee of Harvard. This may include some special appointments. If you are paid by Harvard and are not a Postdoc/Researcher or Faculty member, this is likely the right choice for you. Lecturers and visiting scholars or outside faculty who are on a term appointment should also use this designation.
  • UNDERGRAD – You are an enrolled undergraduate working under a PI who belongs to one of the Affiliations above and who will approve/sponsor your account.
  • GRAD STUDENT – You are an enrolled graduate student working under a PI who belongs to one of the Affiliations above and who will approve/sponsor your account.
  • POSTDOC/RESEARCHER – You are a researcher (post-doctoral or a researcher working in a similar capacity) with a PI who belongs to one of the Affiliations above and who will approve/sponsor your account. Harvard faculty who are simply collaborating with another PI should also use this designation to avoid triggering the billing and vetting required for a new cluster lab group.
  • FACULTY – You are an appointed Harvard faculty member with a supported affiliation who wishes to establish your own cluster lab group. Faculty accounts have additional controls and privileges that are provided only to those with faculty appointments.
    PLEASE NOTE: Faculty signups require vetting and will take longer to process. 
  • NON-FACULTY PI – You have a Harvard non-faculty PI appointment/PI rights. Non-Faculty PI accounts have additional controls and privileges that are provided with those appointments.
    PLEASE NOTE: Non-Faculty PI signups require vetting and will take longer to process. A 33-digit billing code is required at signup. 
  • EXTERNAL – You are an external collaborator working under a Harvard group or PI who belongs to one of the Affiliations above and who will approve/sponsor your account. The PI/Sponsor is responsible for the usage and conduct of any external collaborators. If the account will be active for more than 90 days, the PI and user should work to obtain a Harvard “Person of Interest” appointment. A Standard PoI appointment lasts for one year: PoI Guide
  • VENDOR – This is a very specific type of title used by the Cores and FASRC to identify vendor accounts. These are generally short-lived accounts to allow vendors to perform work on instruments or other resources.

QUALIFICATIONS AND EXPECTATIONS

Except for External Collaborators, an active appointment or enrollment with Harvard University is required for access to Research Computing resources. Each account must be tied to an individual (account sharing is not allowed) and sponsored by a Harvard PI who has a lab group on the FASRC cluster. It is expected that when that individual no longer has need of the account, the lab or PI responsible for sponsoring their account will notify us of their leaving. This is especially important if the account incurs a charge (billing applies until we are notified to close an account) or has access to sensitive research data.  If it is necessary to extend a user’s access beyond their appointment/enrollment, accommodations can be made with PI/faculty approval.

An exception can be made in some cases to allow collaborators access to Research Computing resources and/or lab data. As with all accounts, PI/faculty approval is required. Please contact rchelp@rc.fas.harvard.edu for any questions surrounding collaborator access.

Please note: Sharing of accounts is not allowed and is in direct violation of security policies. Shared accounts are subject to being immediately disabled, and the PI/faculty member sponsoring the account will be notified.

 

Harvard T.H. Chan School of Public Health
https://docs.rc.fas.harvard.edu/kb/harvard-school-of-public-health/
  • HSPH + FAS Research Computing Overview
  • HSPH + FAS Research Computing Q&A