c3.optimizers.modellearning
Object that deals with the model learning.
Module Contents¶
- class c3.optimizers.modellearning.ModelLearning(sampling, batch_sizes, pmap, datafiles, dir_path=None, estimator=None, seqs_per_point=None, state_labels=None, callback_foms=[], algorithm=None, run_name=None, options={})[source]¶
Bases:
c3.optimizers.optimizer.Optimizer
Object that deals with the model learning.
- Parameters
dir_path (str) – Filepath to save results
sampling (str) – Sampling method from the sampling library
batch_sizes (list) – Number of points to select from each dataset
seqs_per_point (int) – Number of sequences that use the same parameter set
pmap (ParameterMap) – Identifiers for the parameter vector
state_labels (list) – Identifiers for the qubit subspaces
callback_foms (list) – Figures of merit to additionally compute and store
algorithm (callable) – From the algorithm library
run_name (str) – User specified name for the run, will be used as root folder
options (dict) – Options to be passed to the algorithm
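To make the parameter list concrete, here is a hypothetical argument set mirroring the signature above. All values are illustrative only; a real run needs a configured `ParameterMap` and datafiles produced by a calibration experiment, and the sampling and algorithm names are assumptions, not guaranteed entries of the respective libraries.

```python
# Illustrative ModelLearning arguments; every value here is a placeholder.
ml_args = dict(
    sampling="high_std",                      # hypothetical sampling-library entry
    batch_sizes=[20],                         # one batch size per dataset
    datafiles={"left": "data/left_dataset.pickle"},
    seqs_per_point=9,                         # sequences sharing one parameter set
    state_labels=[[1, 0], [0, 1]],            # illustrative qubit subspace labels
    callback_foms=[],                         # no extra figures of merit
    run_name="model_learning_demo",
    options={},
)
# One batch size must be given for each dataset.
assert len(ml_args["batch_sizes"]) == len(ml_args["datafiles"])
```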
- log_setup(self) None [source]¶
Create the folders to store data.
- Parameters
dir_path (str) – Filepath
run_name (str) – User specified name for the run
- read_data(self, datafiles: Dict[str, str]) None [source]¶
Open data files and read in experiment results.
- Parameters
datafiles (dict) – Dictionary of paths to files that contain learning data.
- select_from_data(self, batch_size) List[int] [source]¶
Select a subset of each dataset to compute the goal function on.
- Parameters
batch_size (int) – Number of points to select
- Returns
Indices of the selected data points.
- Return type
list
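The selection amounts to drawing `batch_size` indices from a dataset according to some heuristic. As a minimal sketch (not c3's actual sampling library), this stand-in implements one plausible strategy, picking the points with the largest measurement standard deviation:

```python
import numpy as np

def select_from_data(stds: np.ndarray, batch_size: int) -> list:
    """Pick the batch_size points with the largest standard deviation.

    Stand-in for one sampling heuristic; c3's sampling library
    offers several strategies selected via the `sampling` argument.
    """
    order = np.argsort(stds)[::-1]   # most uncertain points first
    return order[:batch_size].tolist()

stds = np.array([0.01, 0.30, 0.05, 0.22])
assert select_from_data(stds, 2) == [1, 3]
```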
- confirm(self) None [source]¶
Compute the validation set, i.e. the value of the goal function on all points of the dataset that were not used for learning.
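The validation set described above is simply the complement of the indices used for learning; a minimal sketch of that bookkeeping (hypothetical helper, not a c3 method):

```python
import numpy as np

def validation_indices(n_total: int, learned: list) -> list:
    """Indices of all data points not used for learning (the validation set)."""
    return np.setdiff1d(np.arange(n_total), learned).tolist()

assert validation_indices(6, [1, 3]) == [0, 2, 4, 5]
```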
- goal_run(self, current_params: tensorflow.constant) tensorflow.float64 [source]¶
Evaluate the figure of merit for the current model parameters.
- Parameters
current_params (tf.Tensor) – Current model parameters
- Returns
Figure of merit
- Return type
tf.float64
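One common choice of figure of merit for model learning is a deviation between simulated and measured populations, weighted by the measurement uncertainty. The sketch below shows that idea in plain NumPy; it is an illustration of one possible estimator, not necessarily the default behind the `estimator` argument:

```python
import numpy as np

def figure_of_merit(sim_pops, meas_pops, meas_stds):
    """Std-weighted mean squared deviation between simulated and
    measured populations -- one common estimator choice; c3 accepts
    alternatives via the `estimator` argument."""
    residuals = (np.asarray(sim_pops) - np.asarray(meas_pops)) / np.asarray(meas_stds)
    return float(np.mean(residuals ** 2))

# Perfect agreement gives a figure of merit of zero.
assert figure_of_merit([0.5, 0.5], [0.5, 0.5], [0.01, 0.01]) == 0.0
```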
- goal_run_with_grad(self, current_params)[source]¶
Same as goal_run, but also computes the gradient. Very resource intensive and currently unoptimized.
- replace_logdir(self, new_logdir)¶
Specify a new filepath to store the log.
- Parameters
new_logdir (str) – Filepath of the new log directory.
- set_created_by(self, config) None ¶
Store the config file location used to create this optimizer.
- load_best(self, init_point) None ¶
Load a previous parameter point to start the optimization from. Legacy wrapper. Method moved to ParameterMap.
- Parameters
init_point (str) – File location of the initial point
- start_log(self) None ¶
Initialize the log with current time.
- end_log(self) None ¶
Finish the log by recording current time and total runtime.
- log_best_unitary(self) None ¶
Save the best unitary in the log.
- log_parameters(self) None ¶
Log the current status. Write parameters to log. Update the current best parameters. Call plotting functions as set up.
- lookup_gradient(self, x)¶
Return the stored gradient for a given parameter set.
- Parameters
x (np.array) – Parameter set.
- Returns
Value of the gradient.
- Return type
np.array
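Since gradients are computed during goal evaluation and only requested later by the optimizer, a simple cache keyed by the parameter vector's bytes captures the idea. This is a sketch of the mechanism, not c3's internal storage:

```python
import numpy as np

gradient_store = {}

def store_gradient(x: np.ndarray, grad: np.ndarray) -> None:
    """Remember the gradient computed for this exact parameter set."""
    gradient_store[x.tobytes()] = grad

def lookup_gradient(x: np.ndarray) -> np.ndarray:
    """Return the gradient previously stored for this parameter set."""
    return gradient_store[x.tobytes()]

x = np.array([0.1, 0.2])
store_gradient(x, np.array([1.0, -2.0]))
# A fresh array with the same values hits the same cache entry.
assert np.array_equal(lookup_gradient(np.array([0.1, 0.2])),
                      np.array([1.0, -2.0]))
```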
- fct_to_min(self, input_parameters: Union[numpy.ndarray, tensorflow.constant]) Union[numpy.ndarray, tensorflow.constant] ¶
Wrapper for the goal function.
- Parameters
input_parameters ([np.array, tf.constant]) – Vector of parameters in the optimizer friendly way.
- Returns
Value of the goal function. Float if input is np.array else tf.constant
- Return type
[np.ndarray, tf.constant]
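"Optimizer friendly" typically means the parameters arrive in a normalized space and must be mapped back to physical values before the goal is evaluated. The sketch below shows one plausible convention ([-1, 1] mapped onto physical bounds) and returns a plain float for NumPy input; c3's actual scaling lives in the ParameterMap, so the helper names and convention here are assumptions:

```python
import numpy as np

def to_physical(x_opt, lo, hi):
    """Map optimizer-space values in [-1, 1] to physical bounds [lo, hi].

    One plausible convention; the real mapping is done by the ParameterMap.
    """
    x_opt = np.clip(x_opt, -1.0, 1.0)
    return lo + (x_opt + 1.0) / 2.0 * (hi - lo)

def fct_to_min(x_opt, goal, lo, hi):
    """Rescale, evaluate the goal, and return a plain float for np input."""
    return float(goal(to_physical(np.asarray(x_opt), lo, hi)))

# Hypothetical goal: squared distance of all parameters from 5.0.
goal = lambda p: np.sum((p - 5.0) ** 2)
# Optimizer-space zeros map to the midpoint of [0, 10], i.e. exactly 5.0.
assert fct_to_min(np.zeros(3), goal, 0.0, 10.0) == 0.0
```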
- fct_to_min_autograd(self, x)¶
Wrapper for the goal function, including evaluation and storage of the gradient.
- Parameters
x (np.array) – Vector of parameters in the optimizer friendly way.
- Returns
Value of the goal function.
- Return type
float