88 commits
4c8c8d9
Initial primary storage pool plugin skeleton
Sep 30, 2025
bf838b3
Initial primary storage pool plugin skeleton - added license string
rajiv-jain-netapp Oct 8, 2025
b82fb40
Initial primary storage pool plugin skeleton - added license string +…
rajiv-jain-netapp Oct 8, 2025
1199d55
Initial primary storage pool plugin skeleton - added license string +…
rajiv-jain-netapp Oct 8, 2025
ebc3f00
Merge pull request #1 from NetApp/feature-CSTACKEX-24
rajiv-jain-netapp Oct 9, 2025
28faca1
Feignconfiguration and volume feignClient along with desired POJOs
rajiv-jain-netapp Oct 13, 2025
fe0f752
Feignconfiguration and volume feignClient along with desired POJOs
rajiv-jain-netapp Oct 13, 2025
35f1011
Revert "Feignconfiguration and volume feignClient along with desired …
rajiv-jain-netapp Oct 13, 2025
a03a2c4
Revert "Feignconfiguration and volume feignClient along with desired …
rajiv-jain-netapp Oct 13, 2025
2e0efe8
Feignconfiguration and volume feignClient along with desired POJOs
rajiv-jain-netapp Oct 13, 2025
a96eb9e
Feignconfiguration and volume feignClient along with desired POJOs
rajiv-jain-netapp Oct 13, 2025
0f09c5f
CSTACKEX-28: added copyright comment in the logger configuration
rajiv-jain-netapp Oct 13, 2025
2cc4b0c
CSTACKEX-28: added newline in the end of the file
rajiv-jain-netapp Oct 13, 2025
033c23d
CSTACKEX-28 - incorporated review comments
rajiv-jain-netapp Oct 14, 2025
2822946
Merge pull request #4 from NetApp/feature/CSTACKEX-28
rajiv-jain-netapp Oct 14, 2025
f7b837a
Cluster, SVM and Aggr Feign Client
suryag1201 Oct 16, 2025
c585299
NAS and Job Feign Client
suryag1201 Oct 17, 2025
a492797
CSTACKEX-30 SAN Feign Client (#8)
suryag1201 Oct 22, 2025
25353c2
CSTACKEX-7: ONTAP Primary storage pool (#9)
sandeeplocharla Oct 24, 2025
973f5e2
CSTACKEX-34: Upgrade to framework classes design
rajiv-jain-netapp Oct 28, 2025
686a892
CSTACKEX-34: incorporating the review comments
rajiv-jain-netapp Oct 28, 2025
edfcdde
CSTACKEX-34: transient changes to the review comments
rajiv-jain-netapp Oct 28, 2025
73eb9f5
CSTACKEX-34: Unable to get checkstyle pass hence fixing this as well
rajiv-jain-netapp Oct 28, 2025
465fffe
CSTACKEX-34: further review comments incorporations
rajiv-jain-netapp Oct 28, 2025
3d6bd91
CSTACKEX-34: addressing checkstyle issues
rajiv-jain-netapp Oct 28, 2025
5815ebd
CSTACKEX-34: fix checksyle issues
rajiv-jain-netapp Oct 28, 2025
618f957
Merge pull request #13 from NetApp/feature/CSTACKEX-34
rajiv-jain-netapp Oct 28, 2025
6c4b24e
CSTACKEX-35 Create Async (#14)
suryag1201 Oct 29, 2025
1b0c7f7
Feature/cstackex-01: Primary Storage pool creation
sandeeplocharla Nov 5, 2025
54ddfa9
Merge branch 'apache:main' into main
rajiv-jain-netapp Nov 25, 2025
b23ac40
CSTACKEX-50: Disable, Re-Enable, Delete Storage pool and Enter, Exit …
sandeeplocharla Dec 8, 2025
2c61e76
Feature/cstackex 22: Shared NFS pool and volume creation - Approach 1…
piyush5netapp Dec 8, 2025
e99b98e
feature/CSTACKEX-65: Aggregate selection logic for creating ONTAP Vol…
sandeeplocharla Jan 13, 2026
ef0354a
feature/CSTACKEX-77: added first junit for lifecycle.initialize mthod…
rajiv-jain-netapp Jan 14, 2026
2f02d8a
Delete export policy NFS for the storage pool (#23)
piyush5netapp Jan 19, 2026
1ae738b
Merge pull request #25 from NetApp/feature/CSTACKEX-77
rajiv-jain-netapp Jan 20, 2026
890c2db
UTs for NFS storage pool creation code (#29)
piyush5netapp Feb 2, 2026
2d3b279
Merge branch 'apache:main' into main
rajiv-jain-netapp Feb 4, 2026
8a2c7fb
Feature/cstackex 88 - Storage Pool operation code changes and UTs (#30)
piyush5netapp Feb 6, 2026
b26542f
CSTACKEX-46: Create, Delete iSCSI type Cloudstack volumes, Enter, Can…
sandeeplocharla Feb 11, 2026
856c5cc
Feature/cstackex 112 (#33)
suryag1201 Feb 11, 2026
7c2b229
CSTACKEX-112 Struct Security Issue
Feb 13, 2026
763aa3b
Feature/cstackex 117 (#34)
suryag1201 Feb 13, 2026
eace4ee
CSTACKEX-114: Delete volume or qcow2 file NFS (#32)
piyush5netapp Feb 17, 2026
f42552b
CSTACKEX-18_2: NFS3 snapshot changes
rajiv-jain-netapp Feb 19, 2026
8894248
CSTACK-18_2: fixing junit dependent changes
rajiv-jain-netapp Feb 19, 2026
3f0019a
STACK-18_2: fixes
rajiv-jain-netapp Feb 20, 2026
9b79f46
CSTACKEX-18_2: adding VM snapshot logic
rajiv-jain-netapp Feb 20, 2026
7a0d61e
CSTACKEX-18_2: fix junit issues
rajiv-jain-netapp Feb 20, 2026
7c3419e
CSTACKEX-18_2: fixes for vm snapshot workflow
rajiv-jain-netapp Feb 21, 2026
d2b6a27
CSTACKEX-18_2: fixing the behaviour for the VM level snapshot when qu…
rajiv-jain-netapp Feb 21, 2026
c5d5428
CSTACKEX-18_2: incorporating the review comments.
rajiv-jain-netapp Feb 24, 2026
3f18c11
CSTACKEX-18_2: transient fixes post incorporating the comments
rajiv-jain-netapp Feb 24, 2026
723561b
CSTACKEX-18_2: Incorporate review comments
rajiv-jain-netapp Feb 24, 2026
09968db
CSTACKEX-18_2: quiecing VM would be done based on user input for VM l…
rajiv-jain-netapp Feb 25, 2026
0a1a9c4
CSTACKEX-18_2: ONTAP plugin can not handle memory snapshot with stora…
rajiv-jain-netapp Feb 26, 2026
49df4c3
CSTACKEX-18_2: junit fix
rajiv-jain-netapp Feb 26, 2026
776b9a2
CSTACKEX-18_2: junit fix2
rajiv-jain-netapp Feb 26, 2026
c04e223
CSTACKEX-18_2: ensure that ONTAP volume related calls are served by c…
rajiv-jain-netapp Feb 26, 2026
186e59b
CSTACKEX-18_2: using flexvolume snapshot to get snapshot workflows fo…
rajiv-jain-netapp Feb 26, 2026
1020a2c
CSTACKEX-18_2: using flexvolume snapshot even for CS volume snapshot …
rajiv-jain-netapp Feb 27, 2026
672d7a4
CSTACKEX-18_2: we are taking snapshot for volume with flexvolume snap…
rajiv-jain-netapp Feb 28, 2026
9c63c61
CSTACKEX-18_2: junit fixes with recent refactor
rajiv-jain-netapp Feb 28, 2026
79730ed
CSTACKEX-18_2: fixing snapshot delete condition fix
rajiv-jain-netapp Mar 2, 2026
ae96e9b
CSTACKEX-18_2: delete snapshot should be done over plugin path not on…
rajiv-jain-netapp Mar 2, 2026
1b0bba9
CSTACKEX-18_2: plugin has to consider VM for snapshot in running and …
rajiv-jain-netapp Mar 2, 2026
7780a93
CSTACKEX-18_2: revert snapshot fixes for API not found
rajiv-jain-netapp Mar 2, 2026
5bff41f
CSTACKEC-18_2: revertsnapshot workflow using private cli REST endpoint
rajiv-jain-netapp Mar 4, 2026
2abbed6
CSTACKEX-18_2: taking snapshot with memory option set as true
rajiv-jain-netapp Mar 5, 2026
2340400
CSTACKEX-18_2: add exception handling for any error coming from agent
rajiv-jain-netapp Mar 5, 2026
ce93705
CSTACKEX-18_2: reverting memory snapshot workflow and eroring out for…
rajiv-jain-netapp Mar 5, 2026
e1a6465
CSTACKEX-18_2: revert of memory snapshot implementation
rajiv-jain-netapp Mar 5, 2026
9138e20
CSTACKEX-18_2: rollback the object creation in case of failures
rajiv-jain-netapp Mar 5, 2026
5f9e51c
CSTACKEX-18_2: rollback in case of any failures
rajiv-jain-netapp Mar 9, 2026
142e0e6
CSTACKEX-18_2: comments and some changes
rajiv-jain-netapp Mar 9, 2026
aa74a5a
CSTACKEX-18_2: checkstyle fixes
rajiv-jain-netapp Mar 9, 2026
55447b7
CSTACKEX-18_2: junit fixes
rajiv-jain-netapp Mar 9, 2026
fccaf83
Merge pull request #36 from NetApp/feature/CSTACKEX-18_2
rajiv-jain-netapp Mar 9, 2026
ea40967
feature/CSTACKEX-122: Per host Igroup changes (#37)
piyush5netapp Mar 16, 2026
a41eb28
bugfix/CSTACKEX-130: All VM becomes a dummy/zombie running vm without…
piyush5netapp Apr 13, 2026
7d08878
bugfix/CSTACKEX-143: Second VM creation creates a dummy running VM wi…
piyush5netapp Apr 13, 2026
4a62d40
bugfix/CSTACKEX-131: ISCSI VM created with small sized template which…
piyush5netapp Apr 14, 2026
a6e4b49
bugfix/CSTACKEX-135: added Netapp ontap screen during zone creation (…
piyush5netapp Apr 15, 2026
b58fd24
Merge branch 'main' of github.com:netapp/cloudstack into sync/apache-…
rajiv-jain-netapp Apr 20, 2026
0f5370a
Resolving conflicts from rebase
rajiv-jain-netapp Apr 20, 2026
ddb119f
conflicts are resolvd which are originated from rebase
rajiv-jain-netapp Apr 20, 2026
929d30f
Correction on conflict resolution
rajiv-jain-netapp Apr 21, 2026
a20b6cc
Correction on merge conflicts for constants
rajiv-jain-netapp Apr 21, 2026
@@ -77,6 +77,8 @@ public class KvmFileBasedStorageVmSnapshotStrategy extends StorageVMSnapshotStra

private static final List<Storage.StoragePoolType> supportedStoragePoolTypes = List.of(Storage.StoragePoolType.Filesystem, Storage.StoragePoolType.NetworkFilesystem, Storage.StoragePoolType.SharedMountPoint);

private static final String ONTAP_PROVIDER_NAME = "NetApp ONTAP";

@Inject
protected SnapshotDataStoreDao snapshotDataStoreDao;

@@ -325,6 +327,11 @@ public StrategyPriority canHandle(Long vmId, Long rootPoolId, boolean snapshotMe
List<VolumeVO> volumes = volumeDao.findByInstance(vmId);
for (VolumeVO volume : volumes) {
StoragePoolVO storagePoolVO = storagePool.findById(volume.getPoolId());
if (storagePoolVO.isManaged() && ONTAP_PROVIDER_NAME.equals(storagePoolVO.getStorageProviderName())) {
logger.debug(String.format("%s as the VM has a volume on ONTAP managed storage pool [%s]. " +
"ONTAP managed storage has its own dedicated VM snapshot strategy.", cantHandleLog, storagePoolVO.getName()));
return StrategyPriority.CANT_HANDLE;
}
if (!supportedStoragePoolTypes.contains(storagePoolVO.getPoolType())) {
logger.debug(String.format("%s as the VM has a volume that is in a storage with unsupported type [%s].", cantHandleLog, storagePoolVO.getPoolType()));
return StrategyPriority.CANT_HANDLE;
@@ -503,8 +510,9 @@ protected VMSnapshot takeVmSnapshotInternal(VMSnapshot vmSnapshot, Map<VolumeInf
return processCreateVmSnapshotAnswer(vmSnapshot, volumeInfoToSnapshotObjectMap, createDiskOnlyVMSnapshotAnswer, userVm, vmSnapshotVO, virtualSize, parentSnapshotVo);
}

logger.error("Disk-only VM snapshot for VM [{}] failed{}.", userVm.getUuid(), answer != null ? " due to" + answer.getDetails() : "");
throw new CloudRuntimeException(String.format("Disk-only VM snapshot for VM [%s] failed.", userVm.getUuid()));
String details = answer != null ? answer.getDetails() : String.format("No answer received from host [%s]. The host may be unreachable.", hostId);
logger.error("Disk-only VM snapshot for VM [{}] failed due to: {}.", userVm.getUuid(), details);
throw new CloudRuntimeException(String.format("Disk-only VM snapshot for VM [%s] failed due to: %s.", userVm.getUuid(), details));
}

/**
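The new canHandle() guard in the hunk above can be sketched in isolation. A minimal, hedged sketch — `Pool` is a hypothetical stand-in for StoragePoolVO modeling only the two fields the guard reads; the names are illustrative, not the PR's API:

```java
import java.util.List;

public class OntapGuardSketch {
    // Hypothetical stand-in for StoragePoolVO: only the fields the guard reads.
    record Pool(boolean managed, String provider) {}

    static final String ONTAP_PROVIDER_NAME = "NetApp ONTAP";

    // Mirrors the new check: the generic KVM file-based strategy bows out
    // (CANT_HANDLE, false here) as soon as any of the VM's volumes lives on
    // an ONTAP-managed pool, which has its own dedicated snapshot strategy.
    static boolean genericStrategyCanHandle(List<Pool> pools) {
        for (Pool pool : pools) {
            if (pool.managed() && ONTAP_PROVIDER_NAME.equals(pool.provider())) {
                return false;
            }
        }
        return true;
    }
}
```

An unmanaged pool, or a managed pool from a different provider, still falls through to the existing pool-type checks.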
@@ -1340,6 +1340,15 @@ private void createManagedVolumeCopyTemplateAsync(VolumeInfo volumeInfo, Primary
primaryDataStore.setDetails(details);

grantAccess(volumeInfo, destHost, primaryDataStore);
volumeInfo = volFactory.getVolume(volumeInfo.getId(), primaryDataStore);
// For Netapp ONTAP iscsiName or Lun path is available only after grantAccess
String managedStoreTarget = volumeInfo.get_iScsiName() != null ? volumeInfo.get_iScsiName() : volumeInfo.getUuid();
details.put(PrimaryDataStore.MANAGED_STORE_TARGET, managedStoreTarget);
primaryDataStore.setDetails(details);
// Update destTemplateInfo with the iSCSI path from volumeInfo
if (destTemplateInfo instanceof TemplateObject) {
((TemplateObject)destTemplateInfo).setInstallPath(volumeInfo.getPath());
}

try {
motionSrv.copyAsync(srcTemplateInfo, destTemplateInfo, destHost, caller);
@@ -106,6 +106,10 @@ protected Answer takeDiskOnlyVmSnapshotOfRunningVm(CreateDiskOnlyVmSnapshotComma
return new CreateDiskOnlyVmSnapshotAnswer(cmd, false, errorMsg, null);
}
return new CreateDiskOnlyVmSnapshotAnswer(cmd, false, e.getMessage(), null);
} catch (Exception e) {
String errorMsg = String.format("Creation of disk-only VM snapshot for VM [%s] failed due to %s.", vmName, e.getMessage());
logger.error(errorMsg, e);
return new CreateDiskOnlyVmSnapshotAnswer(cmd, false, errorMsg, null);
} finally {
if (dm != null) {
try {
@@ -146,21 +150,13 @@ protected Answer takeDiskOnlyVmSnapshotOfStoppedVm(CreateDiskOnlyVmSnapshotComma
}
} catch (LibvirtException | QemuImgException e) {
logger.error("Exception while creating disk-only VM snapshot for VM [{}]. Deleting leftover deltas.", vmName, e);
for (VolumeObjectTO volumeObjectTO : volumeObjectTos) {
Pair<Long, String> volSizeAndNewPath = mapVolumeToSnapshotSizeAndNewVolumePath.get(volumeObjectTO.getUuid());
PrimaryDataStoreTO primaryDataStoreTO = (PrimaryDataStoreTO) volumeObjectTO.getDataStore();
KVMStoragePool kvmStoragePool = storagePoolMgr.getStoragePool(primaryDataStoreTO.getPoolType(), primaryDataStoreTO.getUuid());

if (volSizeAndNewPath == null) {
continue;
}
try {
Files.deleteIfExists(Path.of(kvmStoragePool.getLocalPathFor(volSizeAndNewPath.second())));
} catch (IOException ex) {
logger.warn("Tried to delete leftover snapshot at [{}] failed.", volSizeAndNewPath.second(), ex);
}
}
cleanupLeftoverDeltas(volumeObjectTos, mapVolumeToSnapshotSizeAndNewVolumePath, storagePoolMgr);
return new Answer(cmd, e);
} catch (Exception e) {
logger.error("Unexpected exception while creating disk-only VM snapshot for VM [{}]. Deleting leftover deltas.", vmName, e);
cleanupLeftoverDeltas(volumeObjectTos, mapVolumeToSnapshotSizeAndNewVolumePath, storagePoolMgr);
return new CreateDiskOnlyVmSnapshotAnswer(cmd, false,
String.format("Creation of disk-only VM snapshot for VM [%s] failed due to %s.", vmName, e.getMessage()), null);
}

return new CreateDiskOnlyVmSnapshotAnswer(cmd, true, null, mapVolumeToSnapshotSizeAndNewVolumePath);
@@ -192,6 +188,23 @@ protected Pair<String, Map<String, Pair<Long, String>>> createSnapshotXmlAndNewV
return new Pair<>(snapshotXml, volumeObjectToNewPathMap);
}

protected void cleanupLeftoverDeltas(List<VolumeObjectTO> volumeObjectTos, Map<String, Pair<Long, String>> mapVolumeToSnapshotSizeAndNewVolumePath, KVMStoragePoolManager storagePoolMgr) {
for (VolumeObjectTO volumeObjectTO : volumeObjectTos) {
Pair<Long, String> volSizeAndNewPath = mapVolumeToSnapshotSizeAndNewVolumePath.get(volumeObjectTO.getUuid());
PrimaryDataStoreTO primaryDataStoreTO = (PrimaryDataStoreTO) volumeObjectTO.getDataStore();
KVMStoragePool kvmStoragePool = storagePoolMgr.getStoragePool(primaryDataStoreTO.getPoolType(), primaryDataStoreTO.getUuid());

if (volSizeAndNewPath == null) {
continue;
}
try {
Files.deleteIfExists(Path.of(kvmStoragePool.getLocalPathFor(volSizeAndNewPath.second())));
} catch (IOException ex) {
logger.warn("Failed to delete leftover snapshot at [{}].", volSizeAndNewPath.second(), ex);
}
}
}

protected long getFileSize(String path) {
return new File(path).length();
}
@@ -19,6 +19,8 @@
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.cloudstack.utils.qemu.QemuImg;
import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
@@ -96,10 +98,15 @@ public boolean connectPhysicalDisk(String volumeUuid, KVMStoragePool pool, Map<S
String result = iScsiAdmCmd.execute();

if (result != null) {
logger.debug("Failed to add iSCSI target " + volumeUuid);
System.out.println("Failed to add iSCSI target " + volumeUuid);
// Node record may already exist from a previous run; accept and proceed
if (isNonFatalNodeCreate(result)) {
logger.debug("iSCSI node already exists for {}@{}:{}, proceeding", getIqn(volumeUuid), pool.getSourceHost(), pool.getSourcePort());
} else {
logger.debug("Failed to add iSCSI target " + volumeUuid);
System.out.println("Failed to add iSCSI target " + volumeUuid);

return false;
return false;
}
} else {
logger.debug("Successfully added iSCSI target " + volumeUuid);
System.out.println("Successfully added to iSCSI target " + volumeUuid);
@@ -123,21 +130,39 @@ public boolean connectPhysicalDisk(String volumeUuid, KVMStoragePool pool, Map<S
}
}

// ex. sudo iscsiadm -m node -T iqn.2012-03.com.test:volume1 -p 192.168.233.10:3260 --login
iScsiAdmCmd = new Script(true, "iscsiadm", 0, logger);
final String host = pool.getSourceHost();
final int port = pool.getSourcePort();
final String iqn = getIqn(volumeUuid);

// Always try to login; treat benign outcomes as success (idempotent)
iScsiAdmCmd = new Script(true, "iscsiadm", 0, logger);
iScsiAdmCmd.add("-m", "node");
iScsiAdmCmd.add("-T", getIqn(volumeUuid));
iScsiAdmCmd.add("-p", pool.getSourceHost() + ":" + pool.getSourcePort());
iScsiAdmCmd.add("-T", iqn);
iScsiAdmCmd.add("-p", host + ":" + port);
iScsiAdmCmd.add("--login");

result = iScsiAdmCmd.execute();

if (result != null) {
logger.debug("Failed to log in to iSCSI target " + volumeUuid);
System.out.println("Failed to log in to iSCSI target " + volumeUuid);
if (isNonFatalLogin(result)) {
logger.debug("iSCSI login returned benign message for {}@{}:{}: {}", iqn, host, port, result);
// Session already exists — a newly mapped LUN won't be visible until
// the kernel's next periodic SCSI scan (~30-60s).
Script rescanCmd = new Script(true, "iscsiadm", 0, logger);
rescanCmd.add("-m", "session");
rescanCmd.add("--rescan");
String rescanResult = rescanCmd.execute();
if (rescanResult != null) {
logger.warn("iSCSI session rescan returned: {}", rescanResult);
} else {
logger.debug("iSCSI session rescan completed successfully for {}@{}:{}", iqn, host, port);
}
} else {
logger.debug("Failed to log in to iSCSI target " + volumeUuid + ": " + result);
System.out.println("Failed to log in to iSCSI target " + volumeUuid);

return false;
return false;
}
} else {
logger.debug("Successfully logged in to iSCSI target " + volumeUuid);
System.out.println("Successfully logged in to iSCSI target " + volumeUuid);
Expand All @@ -158,8 +183,23 @@ public boolean connectPhysicalDisk(String volumeUuid, KVMStoragePool pool, Map<S
return true;
}

// Removed sessionExists() call to avoid noisy sudo/iscsiadm session queries that may fail on some setups

private boolean isNonFatalLogin(String result) {
if (result == null) return true;
String msg = result.toLowerCase();
// Accept messages where the session already exists
return msg.contains("already present") || msg.contains("already logged in") || msg.contains("session exists");
}

private boolean isNonFatalNodeCreate(String result) {
if (result == null) return true;
String msg = result.toLowerCase();
return msg.contains("already exists") || msg.contains("database exists") || msg.contains("exists");
}

private void waitForDiskToBecomeAvailable(String volumeUuid, KVMStoragePool pool) {
int numberOfTries = 10;
int numberOfTries = 30;
int timeBetweenTries = 1000;

while (getPhysicalDisk(volumeUuid, pool).getSize() == 0 && numberOfTries > 0) {
@@ -238,6 +278,15 @@ public KVMPhysicalDisk getPhysicalDisk(String volumeUuid, KVMStoragePool pool) {
}

private long getDeviceSize(String deviceByPath) {
try {
if (!Files.exists(Paths.get(deviceByPath))) {
logger.debug("Device by-path does not exist yet: " + deviceByPath);
return 0L;
}
} catch (Exception ignore) {
// If FS check fails for any reason, fall back to blockdev call
}

Script iScsiAdmCmd = new Script(true, "blockdev", 0, logger);

iScsiAdmCmd.add("--getsize64", deviceByPath);
@@ -280,8 +329,47 @@ private String getComponent(String path, int index) {
return tmp[index].trim();
}

/**
* Check if there are other LUNs on the same iSCSI target (IQN) that are still
* visible as block devices. This is needed because ONTAP uses a single IQN per
* SVM — logging out of the target would kill ALL LUNs, not just the one being
* disconnected.
*
* Checks /dev/disk/by-path/ for symlinks matching the same host:port + IQN but
* with a different LUN number.
*/
private boolean hasOtherActiveLuns(String host, int port, String iqn, String lun) {
String prefix = "ip-" + host + ":" + port + "-iscsi-" + iqn + "-lun-";
java.io.File byPathDir = new java.io.File("/dev/disk/by-path");
if (!byPathDir.exists() || !byPathDir.isDirectory()) {
return false;
}
java.io.File[] entries = byPathDir.listFiles();
if (entries == null) {
return false;
}
for (java.io.File entry : entries) {
String name = entry.getName();
if (name.startsWith(prefix) && !name.equals(prefix + lun)) {
logger.debug("Found other active LUN on same target: " + name);
return true;
}
}
return false;
}

private boolean disconnectPhysicalDisk(String host, int port, String iqn, String lun) {
// use iscsiadm to log out of the iSCSI target and un-discover it
// Check if other LUNs on the same IQN target are still in use.
// ONTAP (and similar) uses a single IQN per SVM with multiple LUNs.
// Doing iscsiadm --logout tears down the ENTIRE target session,
// which would destroy access to ALL LUNs — not just the one being disconnected.
if (hasOtherActiveLuns(host, port, iqn, lun)) {
logger.info("Skipping iSCSI logout for /" + iqn + "/" + lun +
" — other LUNs on the same target are still active");
return true;
}

// No other LUNs active on this target — safe to logout and delete the node record.

// ex. sudo iscsiadm -m node -T iqn.2012-03.com.test:volume1 -p 192.168.233.10:3260 --logout
Script iScsiAdmCmd = new Script(true, "iscsiadm", 0, logger);
@@ -422,6 +510,19 @@ public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk srcDisk, String destVolu
try {
QemuImg q = new QemuImg(timeout);
q.convert(srcFile, destFile);
// This fix is required when the vendor relies on host-based copy rather than the storage CAN_CREATE_VOLUME_FROM_VOLUME capability.
// When a host-based template copy runs, a small template can sit entirely in the host page cache and the copy is reported successful
// before the data is flushed to storage; disconnectPhysicalDisk would then disconnect the LUN and the cached data would never reach
// the storage LUN. The calls below flush the data to storage so the copy is only considered complete once the flush finishes.
Script flushCmd = new Script(true, "blockdev", 0, logger);
flushCmd.add("--flushbufs", destDisk.getPath());
String flushResult = flushCmd.execute();
if (flushResult != null) {
logger.warn("iSCSI copyPhysicalDisk: blockdev --flushbufs returned: {}", flushResult);
}
Script syncCmd = new Script(true, "sync", 0, logger);
syncCmd.execute();
logger.info("iSCSI copyPhysicalDisk: flush/sync completed ");
} catch (QemuImgException | LibvirtException ex) {
String msg = "Failed to copy data from " + srcDisk.getPath() + " to " +
destDisk.getPath() + ". The error was the following: " + ex.getMessage();
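The single-IQN-per-SVM guard (hasOtherActiveLuns) in the diff above keys off sibling /dev/disk/by-path symlinks. A minimal sketch of the same matching logic, with the directory listing lifted out as a parameter so it can be exercised without a real host — the class and parameter names are illustrative, not the PR's API:

```java
import java.util.List;

public class LunGuardSketch {
    // True when the by-path listing contains another LUN symlink for the same
    // host:port and IQN but a different LUN number. In that case the caller
    // must skip "iscsiadm --logout": ONTAP exposes one IQN per SVM, so logging
    // out of the target would tear down every LUN on it, not just this one.
    static boolean hasOtherActiveLuns(List<String> byPathEntries, String host, int port, String iqn, String lun) {
        String prefix = "ip-" + host + ":" + port + "-iscsi-" + iqn + "-lun-";
        for (String name : byPathEntries) {
            if (name.startsWith(prefix) && !name.equals(prefix + lun)) {
                return true;
            }
        }
        return false;
    }
}
```

The production code obtains the entries via File.listFiles() on /dev/disk/by-path; passing the names in keeps the sketch self-contained.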
14 changes: 14 additions & 0 deletions plugins/storage/volume/ontap/pom.xml
@@ -39,6 +39,7 @@
<junit-jupiter.version>5.8.1</junit-jupiter.version>
<mockito.version>3.12.4</mockito.version>
<mockito-junit-jupiter.version>5.2.0</mockito-junit-jupiter.version>
<byte-buddy-agent.version>1.11.13</byte-buddy-agent.version>
</properties>
<dependencyManagement>
<dependencies>
@@ -121,12 +122,24 @@
<version>${mockito.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>net.bytebuddy</groupId>
<artifactId>byte-buddy-agent</artifactId>
<version>${byte-buddy-agent.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.assertj</groupId>
<artifactId>assertj-core</artifactId>
<version>${assertj.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.cloudstack</groupId>
<artifactId>cloud-engine-storage-snapshot</artifactId>
<version>4.23.0.0-SNAPSHOT</version>
Copilot AI commented on Apr 22, 2026:

The dependency on cloud-engine-storage-snapshot is pinned to 4.23.0.0-SNAPSHOT while the rest of the module uses ${project.version}. This makes backports/version bumps harder and can break builds if the parent version changes. Prefer ${project.version} (or rely on dependencyManagement) for intra-repo artifacts.

Suggested change:
-            <version>4.23.0.0-SNAPSHOT</version>
+            <version>${project.version}</version>
<scope>compile</scope>
</dependency>
</dependencies>
<repositories>
<repository>
@@ -151,6 +164,7 @@
<version>${maven-surefire-plugin.version}</version>
<configuration>
<skipTests>false</skipTests>
<argLine>-javaagent:${settings.localRepository}/net/bytebuddy/byte-buddy-agent/${byte-buddy-agent.version}/byte-buddy-agent-${byte-buddy-agent.version}.jar</argLine>
<includes>
<include>**/*Test.java</include>
</includes>