

Disk full error when running Azure ML jobs using custom environments from Azure DevOps
I get a disk full error while running a model training job with the Azure ML SDK, launched from Azure DevOps. I created a custom environment inside the Azure ML workspace and the job uses it.
I use Azure CLI tasks in Azure DevOps to launch these training jobs. How can I resolve the disk full issue?
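For reference, the submission step that the DevOps CLI task runs is essentially the following SDK script (a minimal sketch; the experiment name, runconfig location, and authentication details are placeholders, not the exact values from my pipeline):

from azureml.core import Workspace, Experiment, ScriptRunConfig
from azureml.core.authentication import AzureCliAuthentication
from azureml.core.runconfig import RunConfiguration

# The Azure CLI task has already run `az login` with the service connection,
# so the CLI credentials can be reused for the workspace handle.
cli_auth = AzureCliAuthentication()
ws = Workspace.from_config(auth=cli_auth)

# Load the .runconfig shown below; "cnn_training" is the runconfig name
# (assumed to live under .azureml/ in the repository).
run_config = RunConfiguration.load(path=".", name="cnn_training")

# Wrap the training script with that run configuration.
src = ScriptRunConfig(
    source_directory=".",
    script="cnn_training.py",
    run_config=run_config,
)

# Submit to an experiment; the experiment name here is a placeholder.
run = Experiment(workspace=ws, name="cnn-training").submit(src)
run.wait_for_completion(show_output=True)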
Error message shown in the training task output in Azure DevOps:
"error": { "code": "UserError", "message": "{\"Compliant\":\"Disk full while running job. Please consider reducing amount of data accessed, or upgrading VM SKU. Total space: 14045 MB, available space: 1103 MB.\"}\n{\n \"code\": \"DiskFullError\",\n \"target\": \"\",\n \"category\": \"UserError\",\n \"error_details\": []\n}", "messageParameters": {}, "details": [] },
The .runconfig file for the training job:
framework: Python
script: cnn_training.py
communicator: None
autoPrepareEnvironment: true
maxRunDurationSeconds:
nodeCount: 1
environment:
  name: cnn_training
  python:
    userManagedDependencies: true
    interpreterPath: python
  docker:
    enabled: true
    baseImage: 54646eeace594cf19143dad3c7f31661.azurecr.io/azureml/azureml_b17300b63a1c2abb86b2e774835153ee
    sharedVolumes: true
    gpuSupport: false
    shmSize: 2g
    arguments: []
history:
  outputCollection: true
  snapshotProject: true
  directoriesToWatch:
  - logs
dataReferences:
  workspaceblobstore:
    dataStoreName: workspaceblobstore
    pathOnDataStore: dataname
    mode: download
    overwrite: true
    pathOnCompute:
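For context, cnn_training.py reads the downloaded data from the path that the data reference resolves to, roughly like this (a sketch assuming the AZUREML_DATAREFERENCE_<name> environment-variable convention for data references in download mode; the size check is only illustrative, not my exact code):

import os

# With mode: download, the contents of pathOnDataStore are copied onto the
# node's local disk and the local path is exposed through an environment
# variable named after the data reference ("workspaceblobstore" here).
data_dir = os.environ.get("AZUREML_DATAREFERENCE_workspaceblobstore", "")

# Illustrative check: how much of the node's ~14 GB local disk the download uses.
total_bytes = 0
for root, _dirs, files in os.walk(data_dir):
    for name in files:
        total_bytes += os.path.getsize(os.path.join(root, name))
print(f"Downloaded data under {data_dir}: {total_bytes / (1024 ** 2):.0f} MB")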
Is any additional configuration needed to resolve the disk full issue? Are there changes I should make in the .runconfig file?

