Let's go through a user workflow. The outline below assumes a user login and password have already been set up.
As you run the following commands, note that there are additional parameters that can make finding and updating jobs and models faster and easier.
s.upload_data_set('Document.csv', 'DatasetName', parameter_1, parameter_2)
s.run_train_a_model('ModelName', parameter_1, parameter_2, ...)
A common error is not naming your model the first time you upload a data set: the artifact can end up unnamed, or it may be assigned a random string such as '23409u23409324'. If you look up the job_status, however, you can identify the artifact_name and prevent overwriting artifacts or having jobs overlap.
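The upload/train/check-status pattern can be sketched as follows. Note that the stub class below is an assumption, not the real SDK: it stands in for the session object `s` so the naming and job-status pattern can be shown end to end, and the method names (`upload_data_set`, `run_train_a_model`, `job_status`) are taken from or inferred from the examples above.

```python
import uuid

class DarwinSessionStub:
    """Stand-in for the SDK session object `s` (an assumption for illustration)."""
    def __init__(self):
        self.jobs = {}

    def upload_data_set(self, path, dataset_name=None):
        # Unnamed uploads get a random string, mirroring the pitfall described above.
        name = dataset_name or uuid.uuid4().hex[:10]
        self.jobs[name] = {'artifact_name': name, 'status': 'uploaded'}
        return name

    def run_train_a_model(self, model_name, dataset_name):
        self.jobs[model_name] = {'artifact_name': model_name, 'status': 'training'}
        return model_name

    def job_status(self, name):
        return self.jobs[name]

s = DarwinSessionStub()

# Always pass an explicit name so the artifact is easy to find later.
dataset = s.upload_data_set('Document.csv', dataset_name='my_dataset')
model = s.run_train_a_model('MyModel', dataset_name=dataset)

# Look up job_status to confirm the artifact_name before starting another job.
status = s.job_status(model)
print(status['artifact_name'], status['status'])
```

Checking `job_status` before launching a second job is what prevents two runs from colliding on the same (possibly auto-generated) artifact name.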
Our API has many useful methods; take advantage of their arguments to make the most of your data.
Take a look at our entire API here: https://darwin-api.sparkcognition.com/v1