Disk Full Error When Running Azure ML Jobs Using Custom Environments from Azure DevOps

I get a disk full error while running a model training job launched from Azure DevOps with the Azure ML SDK. I created a custom environment inside the Azure ML workspace and used it for the job.

I am using Azure CLI tasks in Azure DevOps to launch these training jobs. How can I resolve the disk full issue?

Error message shown in the Azure DevOps task log:

"error": {
        "code": "UserError",
        "message": "{\"Compliant\":\"Disk full while running job. Please consider reducing amount of data accessed, or upgrading VM SKU. Total space: 14045 MB, available space: 1103 MB.\"}\n{\n  \"code\": \"DiskFullError\",\n  \"target\": \"\",\n  \"category\": \"UserError\",\n  \"error_details\": []\n}",
        "messageParameters": {},
        "details": []

The .runconfig file for the training job:

framework: Python
script: cnn_training.py
communicator: None
autoPrepareEnvironment: true
nodeCount: 1
environment:
  name: cnn_training
  python:
    userManagedDependencies: true
    interpreterPath: python
  docker:
    enabled: true
    baseImage: 54646eeace594cf19143dad3c7f31661.azurecr.io/azureml/azureml_b17300b63a1c2abb86b2e774835153ee
    sharedVolumes: true
    gpuSupport: false
    shmSize: 2g
    arguments: []
history:
  outputCollection: true
  snapshotProject: true
  directoriesToWatch:
  - logs
dataReferences:
  dataname:
    dataStoreName: workspaceblobstore
    pathOnDataStore: dataname
    mode: download
    overwrite: true

Is there additional configuration needed to fix the disk full issue? Do any changes need to be made in the .runconfig file?
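One change that may be relevant: with `mode: download`, everything under `pathOnDataStore` is copied onto the node's OS disk (only ~14 GB total here), while `mode: mount` streams blobs from the datastore on demand. A hedged fragment of the data-reference section with that one key changed (key names follow the standard .runconfig schema; whether mount is viable depends on the compute target and access pattern):

```yaml
dataReferences:
  dataname:
    dataStoreName: workspaceblobstore
    pathOnDataStore: dataname
    mode: mount
    overwrite: true
```

Alternatively, upgrading to a VM SKU with a larger OS disk, as the error message itself suggests, would also avoid the limit.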

Python 24-11-22, 9:54 p.m. Nikhil6208
