Class: DataManager

DataManager(args)

This solved a big problem: do not provide a concrete generic type here, so others can override without issues. Either leave it as <> (correct in the generic build pipeline worker) or, better, pass any (*), which allows generic templating in the JSDocs. A templating sketch follows the constructor below.

Constructor

new DataManager(args)

Parameters:
Name Type Description
args DataManagerConstructorArgs.<M, E>
Source:
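
As a rough, hypothetical illustration of the note above (TodoModel and the subclass are invented for the example, and the second template argument is assumed), a consumer can bind the open template parameter through JSDoc precisely because the base class does not pin a concrete generic type:

```js
/** @typedef {{ id: string, title: string, done: boolean }} TodoModel */

/**
 * Hypothetical subclass binding the open template parameter M to TodoModel.
 * This only works because DataManager leaves its generic type unpinned.
 * @extends {DataManager<TodoModel, Event>}
 */
class TodoDataManager extends DataManager {
  /** @param {DataManagerConstructorArgs<TodoModel, Event>} args */
  constructor(args) {
    super(args);
  }
}
```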

Members

dataOperationsOverrideBehavior

Source:

dataOperationsRecords

Uses a stack so the order of requests can be debugged later.
Source:

deleteDataPipeline :DeleteDataPipelineWorker.<M>

Type:
Source:

loadNewDataPipeline :LoadNewDataPipelineWorker.<M>

Type:
Source:

masterWorkingModel

Source:

serverSideDataLoadPipeline

Deprecated:
  • Replaced by the new hydration logic
Source:

serverSideOptions

Source:

updateDataPipeline :UpdateDataPipelineWorker.<M>

Type:
Source:

uploadDataPipeline :UploadDataPipelineWorker.<M>

Type:
Source:

viewManagersPendingInit

Source:

(static) _ARRAY_SELF_TYPE

Source:

(static) _CANCELLED_DATA_OP

Source:

(static) _MODEL_ROOT_SCOPE

Source:

(static) _NESTED_SCOPE_KEY_SPLITTER

Source:

(static) _SCOPED_ARRAY_LITERAL

Source:

(static) _SERVER_SIDE_DATA_ATTRS

Deprecated:
  • Yes
Source:

(static) _SERVER_SIDE_PASSED

Source:

Methods

bulkCreateModels()

Source:

commitBulkModels()

Source:

commitModel()

Check the warning at commit. When working on the whole model, updating correctly needs an algorithm change: the commit has to walk the model nest-deep, changing only the values given explicitly in temp. The recursion goes one level deep per key; at a node with no children it processes all of that node's keys, then returns so the parent can finish. In effect, a depth-first search.
Source:
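
A minimal sketch of the depth-first commit described above, assuming committed and temp are plain nested objects (the helper name is hypothetical, not the class's actual implementation):

```js
// Depth-first: recurse into child objects first, then copy the leaf values
// that were explicitly set in temp onto the committed model.
function commitTempDepthFirst(committed, temp) {
  for (const key of Object.keys(temp)) {
    const value = temp[key];
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      committed[key] = committed[key] || {};
      commitTempDepthFirst(committed[key], value); // go one level deeper
    } else {
      committed[key] = value; // leaf: overwrite with the temp value
    }
  }
  return committed;
}
```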

comparator()

Source:

createAndCommitModel()

Source:

createModel()

Creates a new model and commits new data to it
Source:

dataLength()

Source:

deleteCompleteModel(modelId)

Parameters:
Name Type Description
modelId string
Source:

deleteData()

Source:

flushAllData()

Source:

flushModelTemp()

Source:

flushScopedData()

Passing MODEL_ROOT for both modelId and scope flushes all data; otherwise a modelId MUST be paired with a valid scope. Cancels ALL data operations for the given scope.
Source:
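
A hedged usage sketch of the rule above; the argument order and the use of the _MODEL_ROOT_SCOPE static as the MODEL_ROOT sentinel are assumptions:

```js
// Assumed argument order (modelId, scope); illustrative only.
// Flush everything: pass the model-root sentinel for both modelId and scope.
dataManager.flushScopedData(DataManager._MODEL_ROOT_SCOPE, DataManager._MODEL_ROOT_SCOPE);

// Flush one model's data: the modelId must be paired with a valid scope.
// All pending data operations for that scope are cancelled as well.
dataManager.flushScopedData('model-42', 'profile.addresses');
```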

getAllViewManagersForScope(scope)

Parameters:
Name Type Description
scope
Source:

getDe_MappedOperationScope(mappedScope)

Demaps a scope. The mappedDataId goes LAST in the mapping, after the _FOR_MAPPED_ keyword (see the sketch after getMappedDataOperationScope).
Parameters:
Name Type Description
mappedScope string
Source:

getMappedDataOperationScope(scope, mappedDataId) → {string}

Parameters:
Name Type Description
scope ReqScope
mappedDataId string
Source:
Returns:
Type
string
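
A minimal sketch of the mapping format these two methods describe, assuming _FOR_MAPPED_ is used as a plain string separator (these helpers are illustrative, not the actual implementations):

```js
const FOR_MAPPED = '_FOR_MAPPED_';

// Map: the mappedDataId goes LAST, after the _FOR_MAPPED_ keyword.
function mapScope(scope, mappedDataId) {
  return `${scope}${FOR_MAPPED}${mappedDataId}`;
}

// Demap: split on the keyword to recover the original scope and the id.
function demapScope(mappedScope) {
  const [scope, mappedDataId] = mappedScope.split(FOR_MAPPED);
  return { scope, mappedDataId };
}

// mapScope('cart.items', 'item-7')            -> 'cart.items_FOR_MAPPED_item-7'
// demapScope('cart.items_FOR_MAPPED_item-7')  -> { scope: 'cart.items', mappedDataId: 'item-7' }
```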

getModel()

Returns a copy of the stored (COMMITTED) model. A copy is returned to avoid external manipulation that bypasses the data manager's channels.
Source:
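
A minimal sketch of the defensive copy described above, using structuredClone as one possible copy mechanism (the real implementation may differ):

```js
// Return a deep copy of the committed model so callers cannot mutate
// the manager's internal state outside its own channels.
function getModelCopy(committedModel) {
  return structuredClone(committedModel);
}
```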

getModelId()

Source:

getModelInIndex()

Consider renaming to getModelInPosition to match the queue terminology.
Source:

getNewDataScopedToRequest()

Source:

getOrderedArrayIndicesForMappedDataId(scope, mappedDataId)

Parameters:
Name Type Description
scope
mappedDataId string
Source:

getScopedModel()

Source:

getScopedModelFromRef()

ACCESS THIS IF AND ONLY IF YOU'VE REDUCED YOUR SCOPE TO THE DATA SPACE
Source:

getValidDataOperationsStack(recordsStamp, buildNew)

Parameters:
Name Type Attributes Description
recordsStamp string
buildNew boolean <optional>
Source:
Returns:

getViewManager()

Source:

hasData()

Source:

informObserversOfMutationState(scope, cb)

Parameters:
Name Type Description
scope
cb
Source:

(async) initDataManagerServerSide()

Called to initialize server-side data for the data manager
Source:

initViewManagersInWait()

Deprecated:
  • Yes
Source:

loadData()

Source:

(async) loadServerSideData() → {Promise.<void>}

Deprecated:
  • To be replaced by a pipeline that can be cancelled
Source:
Returns:
Type
Promise.<void>

mergeScopedDataToModel()

TODO: confirm the algorithm.
Source:

onPostDataOperation()

Source:

overwriteModel()

Overwrites temp in the model with the new value.
Source:

preProcessDataOperation()

TODO: finish cancellations. Reject if the new operation's scope is a parent or child of a running operation's scope (so internal ones finish first, as per the model object), following the data operations override behavior.
Source:

recursiveValueReference(scope, prevKey, orderedArrayIndices, referencedModel, mainModel, stopAtNodeforRef) → {ValueTypeOfNested.<MainModel, ReqScope>|ValueTypeOfArrayOnly.<M, ReqScope>}

Use this to recursively reference a model and its type. Not as straightforward, especially when working with arrays. To make the work easier, the manager holds a reference to where the change should happen and changes that reference to the new value. If the value is an object it is merged; if it is a literal it is overwritten (a sketch of this rule follows the parameters below).
Parameters:
Name Type Description
scope ReqScope
prevKey string
orderedArrayIndices ViewManagerOrderedArrayIndices Provided by view managers. The parent can be inferred to find the index of its model id and create the right order, so a child list is kept which can also be used to invoke a new build for recyclable lists.
referencedModel ValueTypeOfNested.<MainModel, ReqScope>
mainModel MainModel
stopAtNodeforRef boolean
Source:
Returns:
The value type of the array is also returned, if deconstructed.
Type
ValueTypeOfNested.<MainModel, ReqScope> | ValueTypeOfArrayOnly.<M, ReqScope>
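
A minimal sketch of the merge-or-overwrite rule mentioned in the description (the traversal itself is omitted and the helper is hypothetical):

```js
// At the node the manager holds a reference to, apply the new value:
// objects are merged onto the reference, literals overwrite it.
function applyAtReference(parent, key, newValue) {
  const current = parent[key];
  const bothObjects =
    current && newValue &&
    typeof current === 'object' && typeof newValue === 'object' &&
    !Array.isArray(current) && !Array.isArray(newValue);

  parent[key] = bothObjects ? { ...current, ...newValue } : newValue;
  return parent[key];
}
```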

reduceModelToProperties()

If it returns undefined, the property doesn't exist in the target model.
Source:

requestDataUpload(modelID, options, scope, newData, mappedDataId, reqAddr, requestedMutation, overrideNetworkInterface, overrideNetworkInterfaceScope)

Parameters:
Name Type Attributes Description
modelID string
options SendDataOptions
scope ReqScope
newData ValueTypeOfNested.<M, ReqScope>
mappedDataId string
reqAddr string
requestedMutation DataManagerMutations
overrideNetworkInterface
overrideNetworkInterfaceScope NestedParentKeysOf.<M, ReqScope> <optional>
Source:
Returns:

(async) runDataMutation()

A helper method to ensure all mutations follow a basic or given flow for mutation execution. It helps homogenize future updates and makes mutation errors easier to trace on a global scope. FOR SCOPE: always ensure it is the original scope so the data integrity passes are done correctly. A mappedDataId is now included per scope to cover individual array changes and make them asynchronous (a sketch of this flow follows the notes below).
Source:
To Do:
  • The above has a problem because we can't trace a mapped data id to an individual view manager to avoid side effects. There is a solution for later: use auto-generated uniqueIds. Because of the override scope and how it affects data access, onDataLoadPostProcess could in principle let you commit extra data to this scope; however, the code disallows this by using the original scope (discovered via a bug where the MODEL_ROOT scope temp was null, so the merge to the old model failed). The algorithm now STRICTLY commits to the original scope, so the override can be used freely to avoid writing the same API options several times. For extra data from the server that you want to commit, do a silent update once.
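
A rough sketch of the homogenized mutation flow described above; the hook names and record shape are assumptions, not the class's actual internals:

```js
// Every mutation goes through the same steps: pre-process (scope checks,
// cancellation rules), the mutation itself against the ORIGINAL scope,
// then watcher/observer notification. mappedDataId keeps array-item
// mutations independent of each other.
async function runMutationFlow({ mutate, scope, mappedDataId, preProcess, notify }) {
  const record = { scope, mappedDataId, startedAt: Date.now() };
  await preProcess(record);           // e.g. reject conflicting operations
  const result = await mutate(scope); // always commit against the original scope
  await notify(scope, result);        // update watchers for the original scope only
  return result;
}
```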

setDataDeleteAPI()

For data deletes
Source:

setDataLoadAPI()

For data loads
Source:

setDataUpdateAPI()

For data updates
Source:

setDataUploadAPI()

For data uploads
Source:

setDataWatcher()

Source:

setScopedAPIDataOpInterfaceObserver()

Source:

setScopedAPIOptions()

Source:

setViewManager()

The children scope for the view is also updated. On second thought, that may mix concerns; fire only the relevant scope.
Source:

silentUpdateModel()

Directly commits, but integrity should still be ensured even though this is a non-network update.
Source:

spawnPartialShellModel()

Creates a partial shell model of the target reference model, fulfilling the scope from the root. targetReferenceModel was added because the spread operator breaks with nested objects.
Source:
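
A minimal sketch of building such a partial shell from the root for a dotted scope; the scope format and the default splitter are assumptions loosely based on _NESTED_SCOPE_KEY_SPLITTER:

```js
// Build an object containing only the path to the scoped value, copying the
// leaf from the target reference model instead of spreading the whole tree.
function spawnPartialShell(targetReferenceModel, scope, splitter = '.') {
  const keys = scope.split(splitter);
  const shell = {};
  let shellNode = shell;
  let refNode = targetReferenceModel;
  keys.forEach((key, i) => {
    refNode = refNode ? refNode[key] : undefined;
    if (i === keys.length - 1) {
      shellNode[key] = refNode; // leaf: take the referenced value as-is
    } else {
      shellNode[key] = {};
      shellNode = shellNode[key];
    }
  });
  return shell;
}

// spawnPartialShell({ user: { profile: { name: 'A', age: 3 } } }, 'user.profile.name')
// -> { user: { profile: { name: 'A' } } }
```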

updateModel()

overrideNetworkInterface allows the developer to pass data in a more specific scope but have it ballooned up to the provided network interface scope. This just makes life easier.
Source:
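
A hedged sketch of the ballooning idea: data supplied at a more specific scope is wrapped back up to the scope the network interface was registered for (the helper and scope format are assumptions):

```js
// Wrap data given at 'user.profile.name' back up to the interface scope 'user',
// producing { profile: { name: <data> } } so the registered interface sees the
// shape it expects.
function balloonToInterfaceScope(data, specificScope, interfaceScope, splitter = '.') {
  const extraKeys = specificScope
    .slice(interfaceScope.length)
    .split(splitter)
    .filter(Boolean);
  return extraKeys.reduceRight((wrapped, key) => ({ [key]: wrapped }), data);
}

// balloonToInterfaceScope('Jane', 'user.profile.name', 'user')
// -> { profile: { name: 'Jane' } }
```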

updateWatchers(mutation, scope, newData, oldData)

Parameters:
Name Type Description
mutation DataManagerMutations
scope S
newData
oldData
Source:

uploadDataInModel()

Source:

uploadNewData()

Uploads data NOT in the model using the upload data API address (READ MORE). TODO: update the spec to match loadData, such that for a scope lower than MODEL_ROOT we don't create a new model; instead we just update the existing one.
Source:

valueBasedRecursiveObjectMerge(oldModel, newModel, orderedArrayIndices, afterFirstRun)

ONLY call this for matching object types, where the spread operator is being used to merge values and properties. Merges the new model into the old model (a sketch follows the parameters below).
Parameters:
Name Type Attributes Description
oldModel object
newModel object
orderedArrayIndices QueueInstance.<number>
afterFirstRun boolean <optional>
Source:
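
A minimal sketch of a value-based recursive merge of a new model into an old one, in the spirit of the description above (array handling is simplified and does not use orderedArrayIndices):

```js
// Recursively merge newModel into oldModel in place: matching plain-object
// properties recurse, everything else (literals, arrays here) is overwritten.
function recursiveMerge(oldModel, newModel) {
  for (const key of Object.keys(newModel)) {
    const oldValue = oldModel[key];
    const newValue = newModel[key];
    const bothPlainObjects =
      oldValue && newValue &&
      typeof oldValue === 'object' && typeof newValue === 'object' &&
      !Array.isArray(oldValue) && !Array.isArray(newValue);

    if (bothPlainObjects) {
      recursiveMerge(oldValue, newValue);
    } else {
      oldModel[key] = newValue;
    }
  }
  return oldModel;
}
```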