At its core, the NUCLE.AI block verification process takes the form of solving a classic machine learning optimization problem.
At the start of each verification (mining) cycle, the entire NUCLE.AI network is supplied with a batch of patient data, encrypted using a structure-preserving map, as a training set, along with a specification of the training labels.
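To make the idea of a structure-preserving map concrete, the following toy sketch (not part of the NUCLE.AI specification) shows an order-preserving encryption: a secret strictly increasing affine transform hides raw patient values while keeping their ordering, so rank-based models can still be trained on the ciphertext. The function names and parameters here are illustrative assumptions.

```python
import random

def make_structure_preserving_map(seed=42):
    """Toy order-preserving 'encryption': a secret increasing affine map.
    Real deployments would use a cryptographic structure-preserving scheme."""
    rng = random.Random(seed)
    a = rng.uniform(2.0, 10.0)   # secret positive scale (keeps ordering)
    b = rng.uniform(-5.0, 5.0)   # secret offset
    return lambda x: a * x + b

encrypt = make_structure_preserving_map()
plaintext = [98.6, 101.2, 99.1]            # e.g. patient temperatures
ciphertext = [encrypt(x) for x in plaintext]

# Ordering is preserved, so rank-sensitive analyses still work on ciphertext.
assert sorted(range(3), key=lambda i: plaintext[i]) == \
       sorted(range(3), key=lambda i: ciphertext[i])
```

An actual deployment would need a cryptographically sound scheme; this merely illustrates why preserving structure matters: the network's analysts never see raw values, yet the data remains trainable.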
The models that crowdsourced data analysts train may be recurrent neural networks (RNNs), topic models (such as latent Dirichlet allocation, LDA), or graphical models, depending on the variables of interest within the provided dataset.
Each node of the NUCLE.AI network will then train its models on the provided training set during a predetermined training period. The exact duration of the training period will be determined by the load on the network, the number of available nodes, and the properties of the variables being optimized over.
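A minimal sketch of what a single node might do during the training period, assuming a simple regression task and a wall-clock deadline. The names (`train_for_period`, `deadline_s`) and the gradient-descent model are illustrative assumptions, not part of the protocol.

```python
import time

def train_for_period(X, y, deadline_s=0.5, lr=0.01):
    """Fit y ≈ w*x + b by gradient descent until the training deadline expires."""
    w, b = 0.0, 0.0
    stop = time.monotonic() + deadline_s
    while time.monotonic() < stop:
        n = len(X)
        # gradients of mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - t) * x for x, t in zip(X, y)) / n
        grad_b = sum(2 * (w * x + b - t) for x, t in zip(X, y)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

X = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]     # underlying rule: y = 2x + 1
w, b = train_for_period(X, y)
```

The deadline models the predetermined training period: every node stops at the same cutoff regardless of how far its optimization has progressed, which keeps the subsequent publication step synchronized.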
After the training period, each participating node will publish its trained model securely (with buffer times enforced to prevent copy-optimizers from free-riding on others' published models), along with an explicit declaration of the block that the node is extending.
Immediately following this declaration period, the NUCLE.AI network will be supplied with a validation set. Each active node will then run its trained model on the newly supplied validation set and publish its results.
The node whose model performs best on the validation set will be awarded network tokens (see §4.1, token economics) and will have temporary agency over which token transactions and network information to append to the growing chain of verified blocks.
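The winner-selection step above can be sketched as scoring each published model on the held-out validation set and picking the minimizer of the error. The node identifiers, `mse` scoring choice, and `select_winner` helper are illustrative assumptions; the protocol only requires some agreed-upon validation metric.

```python
def mse(model, X_val, y_val):
    """Mean squared error of a model (a plain callable) on the validation set."""
    return sum((model(x) - t) ** 2 for x, t in zip(X_val, y_val)) / len(X_val)

def select_winner(published_models, X_val, y_val):
    """Return the id of the node whose model minimizes validation error."""
    scores = {node: mse(m, X_val, y_val) for node, m in published_models.items()}
    return min(scores, key=scores.get)

# Three nodes publish models of varying quality for the rule y = 2x + 1.
models = {
    "node_a": lambda x: 2.0 * x + 1.0,   # exact fit
    "node_b": lambda x: 1.8 * x + 1.5,   # close fit
    "node_c": lambda x: 0.5 * x,         # poor fit
}
X_val, y_val = [4.0, 5.0, 6.0], [9.0, 11.0, 13.0]
winner = select_winner(models, X_val, y_val)   # "node_a"
```

Because every node scores every published model on the same validation set, the winner is verifiable by the whole network rather than self-reported, which is what makes the result usable as a block-verification criterion.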
The step-by-step block generation protocol is outlined in §3.4.4.