/scratch/${SLURM_JOB_ID}/ is not really a directory but a softlink: it points to /scr1/${SLURM_JOB_ID}/ when ${SLURM_JOB_ID} is odd, or to /scr2/${SLURM_JOB_ID}/ when it is even. More scratch drives may be added in the future. Splitting jobs across separate scratch partitions means that a failure of one scratch disk affects fewer jobs. We try to create the link from scr[1-2] to scratch on the job's compute nodes and on the head nodes, but it may not always be present.
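For example, you can check which physical drive your job's scratch area landed on by resolving the link, or by testing the job ID's parity directly. A minimal sketch, run inside a job (or on a node where the link exists):

    # Show where the job's scratch link actually points (scr1 for odd job IDs, scr2 for even)
    readlink -f /scratch/${SLURM_JOB_ID}

    # Equivalent check from the job ID alone
    if [ $((SLURM_JOB_ID % 2)) -eq 1 ]; then
        echo "scratch is on /scr1/${SLURM_JOB_ID}"
    else
        echo "scratch is on /scr2/${SLURM_JOB_ID}"
    fi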
/localscratch/${SLURM_JOB_ID}/ exists only on the first compute node of the job, not on the frontend. If you need to recover or view files from it, look in the job log for the name of the first compute node (such as c1502) and ssh to it from the login node.
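If the job has already ended and you no longer have its log handy, the node list can usually be recovered from accounting. A minimal sketch, assuming sacct accounting is available on this cluster and using 12345 as a placeholder job ID:

    # Find which nodes the job ran on; the first node in the list holds /localscratch
    sacct -j 12345 --format=NodeList%30 --noheader

    # Then connect to that node from the login node and inspect the leftover files
    ssh c1502 ls -l /localscratch/12345/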
The /scratch and /localscratch directories are intended to remain for a week after the job ends, but they may be erased sooner if the areas begin to fill. About 8 hours has been the worst case so far.
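Because of this limited lifetime, copy anything you need back to permanent storage before the job finishes. A minimal sketch of the end of a job script, assuming your results are written to a subdirectory named output under scratch and that you keep results in $HOME/results:

    # At the end of the job script, copy results out of scratch before it is cleaned up
    mkdir -p $HOME/results/$SLURM_JOB_ID
    cp -r /scratch/$SLURM_JOB_ID/output $HOME/results/$SLURM_JOB_ID/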