MATLAB Figures in a Batch Mode
https://docs.rc.fas.harvard.edu/kb/matlab-figures-in-a-batch-mode/

This page illustrates how to create MATLAB figures in batch mode (without a GUI) on the cluster. This is especially useful if you would like to do your computations and post-processing together in a consistent computing environment. Below is an example function which generates data, creates a figure, and saves it to a file on the cluster:

%=========================================================================================
% Program: print_figure.m
%
% Usage:
% matlab -nodesktop -nodisplay -nosplash -r "print_figure('file_name','file_format');exit"
%=========================================================================================
function [] = print_figure( outfile, file_format )
  disp('A simple test to illustrate generating figures in a batch mode.');
  x = 0:.1:1;
  A = exp(x);
  plot(x,A);
  print(outfile,file_format);
end

This specific example saves the figure as a PNG (24-bit) image. For a complete list of available image formats, please refer to the official MATLAB documentation; a short sketch of other output options is given at the end of this section. Here is an example SLURM batch-job submission script for sending the job to the queue:

#!/bin/bash
#SBATCH -J print_figure
#SBATCH -o print_figure.out
#SBATCH -e print_figure.err
#SBATCH -p serial_requeue
#SBATCH -c 1
#SBATCH -t 30
#SBATCH --mem=4G
module load matlab
matlab -nodesktop -nodisplay -nosplash -r "print_figure('out','-dpng');exit"

Please note the options -nodesktop -nodisplay -nosplash on the MATLAB command line. These ensure that the figure is generated properly without the GUI. If you name this script, e.g., print_figure.run, it is submitted to the queue with

sbatch print_figure.run

Upon job completion, the file "out.png" is generated in the working directory.
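
Only the arguments to print need to change in order to save the figure in another format or at a higher resolution. The following is a minimal sketch, called after the plot command (the file names are illustrative and not part of the example above):

% Sketch: other output formats and resolutions with print
print('out_hires', '-dpng', '-r300');   % 300-dpi PNG
print('out_vector', '-dpdf');           % PDF (vector graphics) output
print('out_eps', '-depsc');             % color EPS output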

Parallel MATLAB with PCT and DCS
https://docs.rc.fas.harvard.edu/kb/parallel-matlab-pct-dcs/

NOTE: matlab-default is no longer needed to run parallel MATLAB applications. This has been restored to matlab only. Please update your workflows accordingly to reflect this change.

Introduction

This page is intended to help you run parallel MATLAB codes on the FASRC cluster. The latest software modules available on the cluster that support parallel computing with MATLAB are:

matlab/R2024b-fasrc01
matlab/R2022b-fasrc01
matlab/R2021a-fasrc01

Parallel processing with MATLAB is performed with the help of two products, Parallel Computing Toolbox (PCT) and Distributed Computing Server (DCS).

Parallel Computing Toolbox

Currently, PCT provides up to 32 workers (MATLAB computational engines) to execute applications locally on a multicore machine. This means that with the toolbox one could run parallel MATLAB codes locally on the compute nodes and use up to 32 cores.
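
As a quick sanity check (a sketch, not required for the examples below), you can inspect how many cores MATLAB detects on a node and how many workers the local cluster profile allows:

% Sketch: inspect local resources before opening a parallel pool
ncores = feature('numcores')   % number of physical cores MATLAB detects
c = parcluster('local');       % local cluster profile
c.NumWorkers                   % maximum number of workers the profile allows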

Parallel FOR loops (parfor)

Below is a simple code illustrating the use of PCT to calculate PI via a parallel Monte-Carlo method. This example also illustrates the use of parfor (parallel FOR) loops. In this scheme, suitable FOR loops could be simply replaced by parallel FOR loops without other changes to the code:

%============================================================================
% Parallel Monte Carlo calculation of PI
%============================================================================
parpool('local', str2num(getenv('SLURM_CPUS_PER_TASK')))
R = 1;
darts = 1e7;
count = 0;
tic
parfor i = 1:darts
   % Compute the X and Y coordinates of where the dart hit the...............
   % square using Uniform distribution.......................................
   x = R*rand(1);
   y = R*rand(1);
   if x^2 + y^2 <= R^2
      % Increment the count of darts that fell inside of the.................
      % circle...............................................................
     count = count + 1; % Count is a reduction variable.
   end
end
% Compute pi.................................................................
myPI = 4*count/darts;
T = toc;
fprintf('The computed value of pi is %8.7f.\n',myPI);
fprintf('The parallel Monte-Carlo method is executed in %8.2f seconds.\n', T);
delete(gcp);
exit;

Important: When using parpool in MATLAB, you need to include the statement parpool('local', str2num(getenv('SLURM_CPUS_PER_TASK'))) in your code. This statement tells MATLAB to start SLURM_CPUS_PER_TASK workers on the local machine (the compute node where your job lands). When the parallel computation is done, the MATLAB workers are released with the statement delete(gcp). A more defensive variant of the parpool statement is sketched at the end of this subsection. If the above code is named, e.g., pfor.m, it can be sent to the queue with the batch-job submission script below. It starts a MATLAB parallel job with 8 workers:

#!/bin/bash
#SBATCH -J pfor
#SBATCH -o pfor.out
#SBATCH -e pfor.err
#SBATCH -N 1
#SBATCH -c 8
#SBATCH -t 0-00:30
#SBATCH -p shared
#SBATCH --mem=32G
 
module load matlab/R2018b-fasrc01
srun -c $SLURM_CPUS_PER_TASK matlab -nosplash -nodesktop -r "pfor"

The #SBATCH -N 1 and #SBATCH -c 8 directives ensure that 8 processing cores are allocated for the calculation and that they all reside on the same compute node.
If the submission script is named pfor.run, it is submitted to the queue by typing in:

$ sbatch pfor.run
Submitted batch job 1885302

When the job has completed, the pfor.out output file is generated:

                            < M A T L A B (R) >
                  Copyright 1984-2018 The MathWorks, Inc.
                   R2018b (9.5.0.944444) 64-bit (glnxa64)
                              August 28, 2018
To get started, type doc.
For product information, visit www.mathworks.com.
Starting parallel pool (parpool) using the 'local' profile ...
connected to 8 workers.
ans =
 Pool with properties:
            Connected: true
           NumWorkers: 8
              Cluster: local
        AttachedFiles: {}
    AutoAddClientPath: true
          IdleTimeout: 30 minutes (30 minutes remaining)
          SpmdEnabled: true
The computed value of pi is 3.1410644.
The parallel Monte-Carlo method is executed in     2.14 seconds.

Any runtime errors would go to the file pfor.err.
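
If the same code may also be run outside of a SLURM job (for example, during interactive testing), a more defensive variant of the parpool statement falls back to the number of cores MATLAB detects when SLURM_CPUS_PER_TASK is not set. This is a sketch, not part of the original example:

% Sketch: use SLURM_CPUS_PER_TASK when available, otherwise fall back
% to the number of cores MATLAB detects on the machine
nworkers = str2double(getenv('SLURM_CPUS_PER_TASK'));
if isnan(nworkers)
   nworkers = feature('numcores');
end
parpool('local', nworkers);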

Single Program Multiple Data (SPMD)

In addition, MATLAB provides a single program multiple data (SPMD) parallel programming model, which allows for greater control over the parallelization: tasks can be distributed and assigned to parallel processes (labs or workers in MATLAB's terminology) depending on their ranks. The code below provides a simple illustration: it prints out the worker rank from each MATLAB lab:

%====================================================================
% Illustration of SPMD Parallel Programming model with MATLAB
%====================================================================
parpool('local', str2num(getenv('SLURM_CPUS_PER_TASK')))
% Start of parallel region...........................................
spmd
  nproc = numlabs;  % get total number of workers
  iproc = labindex; % get lab ID
  if ( iproc == 1 )
     fprintf ( 1, ' Running with  %d labs.\n', nproc );
  end
  for i = 1: nproc
     if iproc == i
        fprintf ( 1, ' Rank %d out of  %d.\n', iproc, nproc );
     end
  end
% End of parallel region.............................................
end
delete(gcp);
exit;

If the code is named spmd_test.m, it can be sent to the queue with this script:

#!/bin/bash
#SBATCH -J spmd_test
#SBATCH -o spmd_test.out
#SBATCH -e spmd_test.err
#SBATCH -N 1
#SBATCH -c 8
#SBATCH -t 0-00:30
#SBATCH -p shared
#SBATCH --mem=4000
 
module load matlab/R2018b-fasrc01
srun -c $SLURM_CPUS_PER_TASK matlab -nosplash -nodesktop -r "spmd_test"

If the batch-job submission script is named spmd_test.run, then it is sent to the queue with

$ sbatch spmd_test.run
Submitted batch job 1896986

The output is printed out to the file spmd_test.out:

                            < M A T L A B (R) >
                  Copyright 1984-2018 The MathWorks, Inc.
                   R2018b (9.5.0.944444) 64-bit (glnxa64)
                              August 28, 2018
To get started, type doc.
For product information, visit www.mathworks.com.
Starting parallel pool (parpool) using the 'local' profile ...
connected to 8 workers.
ans =
 Pool with properties:
            Connected: true
           NumWorkers: 8
              Cluster: local
        AttachedFiles: {}
    AutoAddClientPath: true
          IdleTimeout: 30 minutes (30 minutes remaining)
          SpmdEnabled: true
Lab 1:
   Running with  8 labs.
   Rank 1 out of  8.
Lab 2:
   Rank 2 out of  8.
Lab 3:
   Rank 3 out of  8.
Lab 4:
   Rank 4 out of  8.
Lab 5:
   Rank 5 out of  8.
Lab 6:
   Rank 6 out of  8.
Lab 7:
   Rank 7 out of  8.
Lab 8:
   Rank 8 out of  8.
Parallel pool using the 'local' profile is shutting down.
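
In practice, SPMD is typically used to partition work across labs rather than just print their ranks. Below is a minimal sketch (not part of the original example, assuming the same SLURM setup as above) that splits an integer sum across labs and combines the partial results with the global reduction gplus:

%====================================================================
% Sketch: split an integer sum across labs and reduce with gplus
%====================================================================
parpool('local', str2num(getenv('SLURM_CPUS_PER_TASK')))
N = 1e6;
spmd
  nproc = numlabs;                 % total number of labs
  iproc = labindex;                % this lab's rank
  blk   = floor(N/nproc);          % block size per lab
  ilo   = 1 + (iproc-1)*blk;       % first index for this lab
  ihi   = iproc*blk;
  if iproc == nproc
     ihi = N;                      % last lab picks up the remainder
  end
  partial = sum(ilo:ihi);          % partial sum on this lab
  total   = gplus(partial, 1);     % global sum, gathered on lab 1
end
fprintf('Sum of 1..%d is %d.\n', N, total{1});
delete(gcp);
exit;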

Distributed Computing Server

The DCS allows a larger number of MATLAB workers to be used on a single node and/or across several compute nodes. The current DCS license on the cluster allows up to 256 MATLAB workers. DCS is integrated with SLURM and works with MATLAB versions R2017a, R2017b, R2018a and R2018b, available with modules matlab/R2017a-fasrc02, matlab/R2017b-fasrc01, matlab/R2018a-fasrc01 and matlab/R2018b-fasrc01. The example steps below describe how to set up and use DCS on the Research Computing cluster:
(1) Log on to the cluster and start an interactive / test bash shell.

$ salloc -p test -N 1 -c 4 -t 0-06:00 --mem=16G bash

(2) Start MATLAB on the command line and configure DCS to run parallel jobs on the cluster by calling configCluster. This command needs to be run only once for each MATLAB version.

  • Start an interactive bash-shell:
# Load a MATLAB software module, e.g.,
$ module load matlab/R2018b-fasrc01
# Start MATLAB interactively without a GUI
$ matlab -nosplash -nodesktop -nodisplay
  • Run configCluster in the MATLAB shell:
>> configCluster
    Must set WallTime and QueueName before submitting jobs to [cluster name].  E.g.
    >> c = parcluster('cannon');
    >> % 5 hour walltime
    >> c.AdditionalProperties.WallTime = '05:00:00';
    >> c.AdditionalProperties.QueueName = 'test-queue';
    >> c.saveProfile

(3) Set up job parameters, e.g., wall time, queue / partition, memory per CPU, etc. The example below illustrates how this can be done interactively. Once these parameters are set, their values become the defaults until changed.

>> c = parcluster('cannon');                    % Define a cluster object
>> c.AdditionalProperties.WallTime = '05:00:00'; % Time limit
>> c.AdditionalProperties.QueueName = 'shared';  % Partition
>> c.AdditionalProperties.MemUsage = '4000';     % Memory per CPU in MB
>> c.saveProfile                                 % Save cluster profile. This becomes default until changed

(4) Display parallel cluster configuration with c.AdditionalProperties.
NOTE: This lists the available cluster options and their current values. These options could be set up as desired.

>> c.AdditionalProperties
ans =
  AdditionalProperties with properties:
              AccountName: ''
     AdditionalSubmitArgs: ''
               Constraint: ''
    DebugMessagesTurnedOn: 0
              GpusPerNode: 0
                 MemUsage: '4000'
             ProcsPerNode: 0
                QueueName: 'shared'
                 WallTime: '05:00:00'
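
Any of the properties listed above can be adjusted in the same way before saving the profile. For example (a sketch; the account name and constraint values are hypothetical placeholders):

>> c.AdditionalProperties.AccountName = 'my_lab';  % hypothetical Slurm account
>> c.AdditionalProperties.Constraint = 'intel';    % hypothetical node feature constraint
>> c.saveProfile                                   % make the new values the defaults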

(5) Submit parallel DCS jobs. There are two ways to submit parallel DCS jobs – from within MATLAB, and directly through SLURM.

Submitting DCS jobs from within MATLAB

We will illustrate submitting DCS jobs from within MATLAB with a specific example. Below is a simple function evaluating the integer sum from 1 through N in parallel:

%==========================================================
% Function: parallel_sum( N )
%           Calculates integer sum from 1 to N in parallel
%==========================================================
function s = parallel_sum(N)
  s = 0;
  parfor i = 1:N
    s = s + i;
  end
  fprintf('Sum of numbers from 1 to %d is %d.\n', N, s);
end

Use the batch command to submit parallel jobs to the cluster. The batch command will return a job object which is used to access the output of the submitted jobs. See the example below and refer to the official MATLAB documentation for more help on batch. This assumes that the MATLAB function is named parallel_sum.m. Note that these jobs will always request n+1 CPU cores, since one worker is required to manage the batch job and pool of workers. For example, a job that needs 8 workers will consume 9 CPU cores.

% Define a cluster object
>> c = parcluster('cannon');
% Define a job object using batch
>> j = c.batch(@parallel_sum, 1, {100}, 'pool', 8);

Notice that this will start a job with one more MATLAB worker (9 instead of 8), because one parallel instance is required to manage the pool of workers (see below).

>> j = c.batch(@parallel_sum, 1, {100}, 'pool', 8);
additionalSubmitArgs =
    '--ntasks=9 -c 1 --ntasks-per-core=1 -p shared -t 05:00:00 --mem-per-cpu=4000 --licenses=MATLAB_Distrib_Comp_Engine:9'

You can query the job state with j.State:

>> j.State
ans =
    'finished'

Once the job completes, we can retrieve the job results. This is done by calling the function fetchOutputs.

>> j.fetchOutputs{:}
ans =
        5050

NOTE: fetchOutputs is used to retrieve function output arguments. Data that has been written to files on the cluster needs to be retrieved directly from the filesystem.
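
For example (a sketch with a hypothetical file name; adjust the path to whatever your function actually writes), you can wait for the job to finish with wait(j) and then read such files directly:

>> wait(j);                        % block until the job reaches the 'finished' state
>> data = load('my_results.mat');  % hypothetical file written by the job on the cluster
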
If needed, one may also access job log files. This is particularly useful for debugging. This is done with the c.getDebugLog(j) command, e.g.,

>> c.getDebugLog(j)
LOG FILE OUTPUT:
Node list: holy7c[03205-03206]
mpiexec.hydra -l -n 9 /n/sw/helmod/apps/centos7/Core/matlab/R2018b-fasrc01/bin/worker -parallel
[3]
[3]                             < M A T L A B (R) >
[3]                   Copyright 1984-2018 The MathWorks, Inc.
[3]                    R2018b (9.5.0.944444) 64-bit (glnxa64)
[3]                               August 28, 2018
[3]
[4]
[4]                             < M A T L A B (R) >
[4]                   Copyright 1984-2018 The MathWorks, Inc.
[4]                    R2018b (9.5.0.944444) 64-bit (glnxa64)
[4]                               August 28, 2018
[4]
[5]
[5]                             < M A T L A B (R) >
[5]                   Copyright 1984-2018 The MathWorks, Inc.
[5]                    R2018b (9.5.0.944444) 64-bit (glnxa64)
[5]                               August 28, 2018
[5]
[6]
[6]                             < M A T L A B (R) >
[6]                   Copyright 1984-2018 The MathWorks, Inc.
[6]                    R2018b (9.5.0.944444) 64-bit (glnxa64)
[6]                               August 28, 2018
[6]
[7]
[7]                             < M A T L A B (R) >
[7]                   Copyright 1984-2018 The MathWorks, Inc.
[7]                    R2018b (9.5.0.944444) 64-bit (glnxa64)
[7]                               August 28, 2018
[7]
[8]
[8]                             < M A T L A B (R) >
[8]                   Copyright 1984-2018 The MathWorks, Inc.
[8]                    R2018b (9.5.0.944444) 64-bit (glnxa64)
[8]                               August 28, 2018
[8]
[3]
[4]
[6]
[7]
[5]
[8]
[3] To get started, type doc.
[4] To get started, type doc.
[5] To get started, type doc.
[6] To get started, type doc.
[8] To get started, type doc.
[3] For product information, visit www.mathworks.com.
[5] For product information, visit www.mathworks.com.
[5]
[8] For product information, visit www.mathworks.com.
[8]
[4] For product information, visit www.mathworks.com.
[4]
[6] For product information, visit www.mathworks.com.
[6]
[3]
[7] To get started, type doc.
[7] For product information, visit www.mathworks.com.
[7]
[0]
[0]                             < M A T L A B (R) >
[0]                   Copyright 1984-2018 The MathWorks, Inc.
[0]                    R2018b (9.5.0.944444) 64-bit (glnxa64)
[0]                               August 28, 2018
[0]
[1]
[1]                             < M A T L A B (R) >
[1]                   Copyright 1984-2018 The MathWorks, Inc.
[1]                    R2018b (9.5.0.944444) 64-bit (glnxa64)
[1]                               August 28, 2018
[1]
[2]
[2]                             < M A T L A B (R) >
[2]                   Copyright 1984-2018 The MathWorks, Inc.
[2]                    R2018b (9.5.0.944444) 64-bit (glnxa64)
[2]                               August 28, 2018
[2]
[0]
[1]
[2]
[0] To get started, type doc.
[1] To get started, type doc.
[2] To get started, type doc.
[1] For product information, visit www.mathworks.com.
[2] For product information, visit www.mathworks.com.
[0] For product information, visit www.mathworks.com.
[1]
[0]
[2]
[0] Sending a stop signal to all the labs...
[0] 2019-02-26 15:30:18 | About to exit MATLAB normally
[0] 2019-02-26 15:30:19 | About to exit with code: 0
Exiting with code: 0

When the results are no longer needed, the job can be deleted:

% Delete the job after the results are no longer needed
j.delete

Submitting DCS jobs directly through SLURM

Parallel DCS jobs could be submitted directly from the Unix command line through SLURM. For this, in addition to the MATLAB source, one needs to prepare a MATLAB submission script with the job specifications. An example is shown below:

%==========================================================
% MATLAB job submission script: parallel_batch.m
%==========================================================
c = parcluster('cannon');
c.AdditionalProperties.QueueName = 'shared';
c.AdditionalProperties.WallTime = '05:00:00';
c.AdditionalProperties.MemUsage = '4000';
j = c.batch(@parallel_sum, 1, {100}, 'pool', 8);
exit;

If this script is named, for instance, parallel_batch.m, it is submitted to the queue with the following SLURM batch-job submission script:

#!/bin/bash
#SBATCH -J parallel_sum_DCS
#SBATCH -o parallel_sum_DCS.out
#SBATCH -e parallel_sum_DCS.err
#SBATCH -p shared
#SBATCH -c 1
#SBATCH -t 0-00:20
#SBATCH --mem=4000
 
srun -c 1 matlab -nosplash -nodesktop -r "parallel_batch"

Assuming the above script is named parallel_sum_DCS.run, for instance, the job is submitted as usual with

sbatch parallel_sum_DCS.run

NOTE: This scheme dispatches two jobs: a serial job that submits the actual DCS parallel job, and the parallel job itself.
Once submitted, the DCS parallel job can be monitored and managed directly through SLURM.

$ sacct
       JobID    JobName  Partition    Account  AllocCPUS      State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
1916487      parallel_+     shared   rc_admin          1  COMPLETED      0:0
1916487.bat+      batch              rc_admin          1  COMPLETED      0:0
1916487.ext+     extern              rc_admin          1  COMPLETED      0:0
1916487.0        matlab              rc_admin          1  COMPLETED      0:0
1916831            Job3     shared   rc_admin          9  COMPLETED      0:0
1916831.bat+      batch              rc_admin          8  COMPLETED      0:0
1916831.ext+     extern              rc_admin          9  COMPLETED      0:0
1916831.0     pmi_proxy              rc_admin          2  COMPLETED      0:0

After the job completes, you can fetch the results and delete the job object from within MATLAB. If the program writes its results directly to disk, fetching is not necessary.

>> j.fetchOutputs{:};
>> j.delete;

MATLAB Parallel Computing Toolbox simultaneous job problem
https://docs.rc.fas.harvard.edu/kb/matlab-pct-simultaneous-job-problem/

Introduction

This document describes a potential problem that occurs when using the Parallel Computing Toolbox (PCT) on the FASRC cluster. If you are not familiar with the PCT, please read our companion document first.
This problem only affects users submitting multiple jobs simultaneously to SLURM on the cluster using the Parallel Computing Toolbox or the Distributed Computing Server. If you are unsure if this affects your workflow, please contact RCHelp.

Description of the problem

Multiple parallel MATLAB jobs that use the Parallel Computing Toolbox (PCT) may sometimes crash. The usual scenario is that the first job runs, but subsequent jobs hang or crash because MATLAB will not allow a second matlabpool to open.

Analysis and resolution of the problem

When a user submits multiple jobs that all use the PCT for parallelization, the matlabpools they create can interfere with one another, which can lead to errors and early termination of scripts.
The MATLAB PCT requires a temporary Job Storage Location where it stores information about the MATLAB pool that is in use. This is simply a directory on the filesystem that MATLAB writes various files to in order to coordinate the parallelization of the matlabpool. By default, this information is stored in /home/YourUsername/.matlab/ (the default JobStorageLocation). When submitting multiple jobs to SLURM that will all use the PCT, all of the jobs will attempt to use this default location for storing job information, thereby creating a race condition where one job modifies the files that were put in place by another. Clearly, this situation must be avoided.
The solution is to have each of your jobs that will use the PCT set a unique location for storing job information. To do this, a temporary directory must be created before launching MATLAB in your submission script and then the matlabpool must be created to explicitly use this unique temporary directory.
The following is an example batch job submission script to do this:

#!/bin/bash
#
#SBATCH -J par_for_test
#SBATCH -p general
#SBATCH -t 0-0:30
#SBATCH -n 12
#SBATCH -N 1
#SBATCH --mem-per-cpu=2000
#SBATCH -o par_for_test.out
#SBATCH -e par_for_test.err
 
module load math/matlab-R2014a
 
# Create a local work directory
mkdir -p /scratch/$USER/$SLURM_JOB_ID
matlab -nosplash -nodesktop -r "pfor"
 
# Cleanup local work directory
rm -rf /scratch/$USER/$SLURM_JOB_ID

Also, the corresponding MATLAB script needs to include these lines:

% create a local cluster object
pc = parcluster('local')
 
% explicitly set the JobStorageLocation to the temp directory that was
% created in your sbatch script
pc.JobStorageLocation = strcat('/scratch/YourUsername/', getenv('SLURM_JOB_ID'))
 
% start the parallel pool with 12 workers
matlabpool(pc, 12)

NOTE: MATLAB discontinued matlabpool and replaced it with parpool in release R2013b and later. Also, with the latest installations you can deploy an unlimited number of MATLAB workers on a compute node.
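
With parpool, the equivalent of the matlabpool call above looks like the following sketch (using getenv('USER') so the path matches the directory created in the sbatch script):

% Sketch (R2013b and later): same idea as above, with parpool instead of matlabpool
pc = parcluster('local');
pc.JobStorageLocation = strcat('/scratch/', getenv('USER'), '/', getenv('SLURM_JOB_ID'));
parpool(pc, 12);   % start the parallel pool with 12 workers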

[pkrastev@holy2a18302 test]$ cat par_for_test.out
 
< M A T L A B (R) >
Copyright 1984-2014 The MathWorks, Inc.
R2014a (8.3.0.532) 64-bit (glnxa64)
February 11, 2014
To get started, type one of these: helpwin, helpdesk, or demo.
For product information, visit www.mathworks.com.
 
pc =
 
Local Cluster
 
Properties:
 
Profile: local
Modified: false
Host: zorana01.rc.fas.harvard.edu
NumWorkers: 32
 
JobStorageLocation: /n/home06/pkrastev/.matlab/local_cluster_jobs/R2014a
RequiresMathWorksHostedLicensing: false
 
Associated Jobs:
 
Number Pending: 0
Number Queued: 0
Number Running: 0
Number Finished: 0
pc =
 
Local Cluster
 
Properties:
 
Profile: local
Modified: true
Host: zorana01.rc.fas.harvard.edu
NumWorkers: 32
 
JobStorageLocation: /scratch/pkrastev/15697660
RequiresMathWorksHostedLicensing: false
 
Associated Jobs:
 
Number Pending: 0
Number Queued: 0
Number Running: 0
Number Finished: 0
 
Starting parallel pool (parpool) using the 'local' profile ... connected to 8 workers.
 
ans =
 
Pool with properties:
 
Connected: true
NumWorkers: 8
Cluster: local
AttachedFiles: {}
IdleTimeout: 30 minute(s) (30 minutes remaining)
SpmdEnabled: true
 
The computed value of pi is 3.1408824.
The parallel Monte-Carlo method is executed in 13.61 seconds.

Further reading

MATLAB’s documentation on JobStorageLocation

Using KNITRO with MATLAB
https://docs.rc.fas.harvard.edu/kb/using-knitro-with-matlab/

Introduction

KNITRO is a solver for non-linear optimization developed by Ziena Optimization. This page provides information on how to use the KNITRO solver with MATLAB on the FASRC cluster.

Using KNITRO

Currently, we have an active license for KNITRO version 9.1.0. The software is available with the knitro/9.1.0-fasrc01 module under LMOD and works with MATLAB version R2015a, available with the software module matlab/R2015a-fasrc01. Below is a quick illustration of how to use the solver interactively:

(1) Start an interactive bash shell:

[pkrastev@sa01 ~]$ salloc -p test -n 1 -t 30 --x11=first --mem=4000
[pkrastev@holy2a18308 ~]$

(2) Load appropriate software modules:

[pkrastev@holy2a18308 ~]$ module load matlab/R2015a-fasrc01
[pkrastev@holy2a18308 ~]$ module load knitro/9.1.0-fasrc01

(3) Start MATLAB interactively and run a KNITRO test:

[pkrastev@holy2a18308 ~]$ matlab -nosplash -nojvm -nodesktop -nodisplay
< M A T L A B (R) >
Copyright 1984-2015 The MathWorks, Inc.
R2015a (8.5.0.197613) 64-bit (glnxa64)
February 12, 2015
For online documentation, see http://www.mathworks.com/support
For product information, visit www.mathworks.com.
Academic License
>> [x fval] = knitromatlab(@(x)cos(x),1)
======================================
Academic Ziena License (NOT FOR COMMERCIAL USE)
KNITRO 9.1.0
Ziena Optimization
======================================
KNITRO presolve eliminated 0 variables and 0 constraints.
algorithm: 1
gradopt: 4
hessopt: 2
honorbnds: 1
maxit: 10000
outlev: 1
par_concurrent_evals: 0
The problem is identified as unconstrained.
KNITRO changing bar_switchrule from AUTO to 1.
KNITRO changing bar_murule from AUTO to 4.
KNITRO changing bar_initpt from AUTO to 3.
KNITRO changing bar_penaltyrule from AUTO to 2.
KNITRO changing bar_penaltycons from AUTO to 1.
KNITRO changing bar_switchrule from AUTO to 1.
KNITRO changing linsolver from AUTO to 2.
Problem Characteristics
-----------------------
Objective goal: Minimize
Number of variables: 1
bounded below: 0
bounded above: 0
bounded below and above: 0
fixed: 0
free: 1
Number of constraints: 0
linear equalities: 0
nonlinear equalities: 0
linear inequalities: 0
nonlinear inequalities: 0
range: 0
Number of nonzeros in Jacobian: 0
Number of nonzeros in Hessian: 1
EXIT: Locally optimal solution found.
Final Statistics
----------------
Final objective value = -1.00000000000000e+00
Final feasibility error (abs / rel) = 0.00e+00 / 0.00e+00
Final optimality error (abs / rel) = 2.37e-09 / 2.37e-09
# of iterations = 7
# of CG iterations = 0
# of function evaluations = 20
# of gradient evaluations = 0
Total program time (secs) = 0.44920 ( 0.251 CPU time)
Time spent in evaluations (secs) = 0.41352
===============================================================================
x =
3.1416
fval =
-1.0000
>>
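
The knitromatlab function also accepts constraint arguments in an fmincon-style calling sequence (empty matrices for unused linear constraints, followed by lower and upper bounds). The call below is a sketch based on that assumption; consult the KNITRO documentation shipped with the module for the exact signature of your version. It minimizes the same objective subject to simple bounds:

>> % Sketch (assumes an fmincon-style signature): minimize cos(x) for 0 <= x <= 2
>> lb = 0; ub = 2;
>> [x, fval] = knitromatlab(@(x)cos(x), 1, [], [], [], [], lb, ub)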