The Patching Framework is an OOTB Hybris framework designed to simplify releasing new code to production (for example, automating ImpEx execution after a deployment). Hybris provides a set of interfaces and abstract classes for writing “patches”, where a patch is a Java class that is executed during system initialization or a system update.
OOTB Hybris also provides a simpler mechanism via @SystemSetup annotations (documentation). It works well for small teams, where you can manually control execution and implementation, but bigger teams need more control, including the ability to control patch execution via project properties. For such cases, you need to adopt the Patching Framework. The main concepts of the framework and its internal implementation can be found in the documentation.
This article showcases a commonly used adoption of the Patching Framework for teams of 50-100 developers.
First of all, we need to implement the Patch interface. To do that, we must provide implementations of the StructureState and Release interfaces.
StructureState interface implementation
The StructureState interface is used to version the folder structure of patches, so you can change the patch folder structure while maintaining backward compatibility.
In real life there is little sense in providing backward compatibility for patches: with agile methodologies, releases only move forward. That means we can use a dummy implementation.
package com.blog.patches.structure;

import de.hybris.platform.patches.organisation.StructureState;

/**
 * Enumeration that defines structure state; it may be used in different objects to indicate for which milestone given
 * object was introduced.
 */
public enum FileStructureVersion implements StructureState {
    V1;

    @Override
    public boolean isAfter(final StructureState structureState) {
        if (this == structureState) {
            return false;
        }
        for (final StructureState iterateValue : values()) {
            if (structureState.equals(iterateValue)) {
                return true;
            }
            if (this.equals(iterateValue)) {
                return false;
            }
        }
        return false;
    }
}
Release interface implementation
The Release interface was designed to group different changes under a single release. Unfortunately, this approach means code changes for each production release; moreover, you need further code changes if something is added to or removed from a release at the last minute. To get rid of that overhead, it makes sense to skip the SAP Commerce notion of releases and instead treat each individual ticket as a separate “release” in Hybris terms. With this abstraction, no code changes are needed to add or remove patches from releases.
private static class ReleasePerTicket implements Release {
    private final String ticketId;

    public ReleasePerTicket(String ticketId) {
        this.ticketId = ticketId;
    }

    @Override
    public String getReleaseId() {
        return ticketId;
    }
}
You may notice that the class is defined as private static and has no package declaration. Such a class is better kept private and nested inside the Patch interface implementation, which is provided below.
Patch interface implementation
One of the goals is to simplify usage of the patch system for developers. For that we provide an abstract implementation of the Patch interface and introduce a naming convention. The main use case is executing some ImpEx for a ticket, so our abstract implementation must make that as simple as possible; we also need the ability to execute custom Java code, run SQL queries, and import ImpEx before or after the custom Java code.
package com.blog.patches.patch;

import com.blog.patches.structure.FileStructureVersion;
import com.blog.patches.util.PatchImpexImportHelper;
import com.blog.patches.util.PatchSQLHelper;
import de.hybris.platform.patches.Patch;
import de.hybris.platform.patches.Release;
import de.hybris.platform.patches.organisation.StructureState;
import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.annotation.Resource;

public abstract class AbstractSimplifiedPatch implements Patch {

    private static final Logger LOG = LoggerFactory.getLogger(AbstractSimplifiedPatch.class);
    private static final String IMPORT_IMPEXES_BASE_FOLDER = "/blogpatches/patches/";

    @Resource
    private PatchImpexImportHelper patchImpexImportHelper;
    @Resource
    private PatchSQLHelper patchSQLHelper;

    abstract String getTicketId();

    protected void executeCustomJavaCode() {
        // Meant to be overridden in child classes.
    }

    @Override
    public void createProjectData(StructureState structureState) {
        LOG.info(getPatchLog("Start patch execution"));
        importImpexes(PatchImpexImportType.PRE);
        executeCustomJavaCode();
        importImpexes(PatchImpexImportType.POST);
    }

    @Override
    public String getPatchId() {
        return getTicketId();
    }

    @Override
    public String getPatchName() {
        // This is used as a label in HAC
        return getTicketId();
    }

    @Override
    public String getPatchDescription() {
        // ! Description is mandatory to persist PatchExecutionModel.
        return getTicketId();
    }

    @Override
    public Release getRelease() {
        return new ReleasePerTicket(getTicketId());
    }

    @Override
    public StructureState getStructureState() {
        return FileStructureVersion.V1;
    }

    protected void executeSql(String sql) {
        if (StringUtils.isBlank(sql)) {
            LOG.warn(getPatchLog("Provided SQL is empty"));
            return;
        }
        try {
            Integer affectedRows = patchSQLHelper.execute(sql);
            LOG.info(getPatchLog("SQL query executed"));
            if (affectedRows != null) {
                LOG.info(getPatchLog("Affected rows " + affectedRows));
            }
        } catch (Exception e) {
            LOG.error(getPatchLog("SQL query failed"), e);
        }
    }

    protected void importImpexesFromCustomFolder(String folderName) {
        // In case any impexes do not fit into the pre/post imports around the
        // Java code execution, this method can be called from the inheriting
        // class. The folder should still be present inside the patch base
        // folder but can have a different name than the ticket Id.
        final String customFolderPath = IMPORT_IMPEXES_BASE_FOLDER + folderName + "/";
        boolean isFolderProcessed = patchImpexImportHelper.importImpexesFromFolder(customFolderPath);
        if (!isFolderProcessed) {
            LOG.warn(getPatchLog("No impexes were found in custom import folder"));
        }
    }

    private void importImpexes(PatchImpexImportType impexImportType) {
        LOG.info(getPatchLog("Start impex import for import type " + impexImportType));
        String importFolder = resolveFolderPath(impexImportType);
        boolean isFolderProcessed = patchImpexImportHelper.importImpexesFromFolder(importFolder);
        if (!isFolderProcessed) {
            LOG.warn(getPatchLog("No impexes were found for import type " + impexImportType));
        }
    }

    private String resolveFolderPath(PatchImpexImportType importFolderType) {
        switch (importFolderType) {
            case PRE:
                return IMPORT_IMPEXES_BASE_FOLDER + getTicketId() + "/";
            case POST:
                return IMPORT_IMPEXES_BASE_FOLDER + getTicketId() + "/post/";
            default:
                return null;
        }
    }

    protected String getPatchLog(String logMessage) {
        return getPatchId() + ": " + logMessage;
    }

    private enum PatchImpexImportType {
        PRE, POST
    }

    private static class ReleasePerTicket implements Release {
        private final String ticketId;

        public ReleasePerTicket(String ticketId) {
            this.ticketId = ticketId;
        }

        @Override
        public String getReleaseId() {
            return ticketId;
        }
    }
}
The implementation above allows a concrete patch to be as simple as:
package com.blog.patches.patch;

import org.springframework.stereotype.Component;

@Component
public class BP123 extends AbstractSimplifiedPatch {

    @Override
    String getTicketId() {
        return "BP-123";
    }
}
Here “BP-123” is your Jira ticket ID. You can’t use “-” in Java class names, so the class is named simply “BP123”. This implementation executes ImpEx files from the “resources” folder under the path “blogpatches/patches/BP-123”. In the end, a developer only has to put the ImpEx files into the extension’s resources folder and create one small Java class.
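For illustration, the resulting layout inside the blogpatches extension could look like this (the file names are hypothetical; the post/ subfolder is covered later in the article):

resources/
  blogpatches/
    patches/
      BP-123/
        0_sample-data.impex
        1_more-sample-data.impex
        post/
          0_after-java-code.impex

Files placed directly under BP-123/ are imported before executeCustomJavaCode() runs, files under post/ after it, each set in alphabetical order.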
The helper classes used in the abstract patch implementation are attached at the end of the article.
Patch execution during System Update
To execute patches during System Update, we need to create an implementation of AbstractPatchesSystemSetup and inject the required patches. To reduce the amount of manual work per release and eliminate possible merge conflicts, we will let the Spring framework autowire our patches.
First of all, we need to enable Spring component scanning so we can use the @Component annotation in our Patch implementations. For that, add the following spring.xml configuration:
<context:annotation-config/>
<context:component-scan base-package="com.blog.patches.patch"/>
Now we can create the implementation of AbstractPatchesSystemSetup and automatically inject all our patches. For that, we utilize the OOTB Spring capability to autowire a list of beans, plus @PostConstruct to set the patches after autowiring.
package com.blog.patches.setup;

import com.blog.patches.constants.BlogpatchesConstants;
import com.blog.patches.patch.AbstractSimplifiedPatch;
import de.hybris.platform.core.initialization.SystemSetup;
import de.hybris.platform.core.initialization.SystemSetup.Process;
import de.hybris.platform.core.initialization.SystemSetup.Type;
import de.hybris.platform.core.initialization.SystemSetupContext;
import de.hybris.platform.core.initialization.SystemSetupParameter;
import de.hybris.platform.core.initialization.SystemSetupParameterMethod;
import de.hybris.platform.patches.AbstractPatchesSystemSetup;
import org.apache.commons.collections4.CollectionUtils;
import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Autowired;

import javax.annotation.PostConstruct;
import java.util.ArrayList;
import java.util.List;

import static org.apache.log4j.Logger.getLogger;

@SystemSetup(extension = BlogpatchesConstants.EXTENSIONNAME)
public class PatchesSystemSetup extends AbstractPatchesSystemSetup {

    private static final Logger LOG = getLogger(PatchesSystemSetup.class);

    @Autowired(required = false)
    private List<AbstractSimplifiedPatch> patchList;

    @PostConstruct
    private void updatePatchesList() {
        if (CollectionUtils.isEmpty(patchList)) {
            LOG.warn("No patches found!");
        } else {
            LOG.info("Automatic patch injection for " + patchList.size() + " patches.");
            super.setPatches(new ArrayList<>(patchList));
        }
    }

    @Override
    @SystemSetup(type = Type.ESSENTIAL, process = Process.UPDATE)
    public void createEssentialData(final SystemSetupContext setupContext) {
        super.createEssentialData(setupContext);
    }

    @Override
    @SystemSetup(type = Type.PROJECT, process = Process.UPDATE)
    public void createProjectData(final SystemSetupContext setupContext) {
        super.createProjectData(setupContext);
    }

    @Override
    @SystemSetupParameterMethod
    public List<SystemSetupParameter> getInitializationOptions() {
        return super.getInitializationOptions();
    }
}
We also need to declare an instance of that class in the spring.xml configuration:
<bean class="com.blog.patches.setup.PatchesSystemSetup" parent="patchesSystemSetup">
    <property name="patches">
        <!-- Empty list is updated in java code. -->
        <list/>
    </property>
</bean>
Long-running data update execution from Patch
Another common requirement is to execute a long-running data migration after System Update. The patch system can be utilized for this as well. The main idea is to create a CronJob that executes custom Java code from the Patch instance. For that, one more abstract class is created, implementing the JobPerformable&lt;CronJobModel&gt; interface.
package com.blog.patches.patch;

import com.blog.patches.util.DataMigrationPatchHelper;
import com.blog.patches.util.InitializationUtils;
import de.hybris.platform.cronjob.enums.CronJobResult;
import de.hybris.platform.cronjob.enums.CronJobStatus;
import de.hybris.platform.cronjob.model.CronJobModel;
import de.hybris.platform.patches.organisation.StructureState;
import de.hybris.platform.servicelayer.cronjob.JobPerformable;
import de.hybris.platform.servicelayer.cronjob.PerformResult;
import org.apache.log4j.Logger;

import javax.annotation.Resource;

import static org.apache.log4j.Logger.getLogger;

public abstract class AbstractDataMigrationPatch extends AbstractSimplifiedPatch implements JobPerformable<CronJobModel> {

    private static final Logger LOG = getLogger(AbstractDataMigrationPatch.class);

    @Resource
    private DataMigrationPatchHelper dataMigrationPatchHelper;

    abstract PerformResult migrateDataAfterDeploymentOnAllNodes(CronJobModel cronJobModel);

    @Override
    public void createProjectData(StructureState structureState) {
        super.createProjectData(structureState);
        dataMigrationPatchHelper.createPatchRunningCronjob(this);
        LOG.info(getPatchLog("Data migration cronjob created"));
    }

    @Override
    public PerformResult perform(CronJobModel cronJobModel) {
        LOG.info(getPatchLog("Data migration cronjob started"));
        PerformResult executionResult = migrateDataAfterDeploymentOnAllNodes(cronJobModel);
        LOG.info(getPatchLog("Data migration finished with result " + executionResult.getResult()));
        if (isFinishedSuccessfully(executionResult)) {
            LOG.info(getPatchLog("Data migration patch clean up"));
            dataMigrationPatchHelper.cleanUpCronjob(this);
        }
        return executionResult;
    }

    private static boolean isFinishedSuccessfully(PerformResult executionResult) {
        return executionResult.getResult() == CronJobResult.SUCCESS && executionResult.getStatus() == CronJobStatus.FINISHED;
    }

    @Override
    public boolean isPerformable() {
        // We want to perform the data migration after the update process has fully completed
        return !InitializationUtils.isSystemUpdateOrInitInProgress();
    }

    @Override
    public boolean isAbortable() {
        return false;
    }
}
Now we can create a patch from the abstract class and implement the migrateDataAfterDeploymentOnAllNodes() method; this will create a CronJob with the patch itself used as the execution bean. The CronJob is scheduled immediately with a Trigger, but it won't be executed until the System Update finishes and the latest code is deployed on the processing node. For data migrations that take less than 5 minutes, the executeCustomJavaCode() method can be used instead.
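For illustration, a minimal sketch of such a patch; the ticket ID BP-456 and the body of the migration method are hypothetical placeholders:

package com.blog.patches.patch;

import de.hybris.platform.cronjob.enums.CronJobResult;
import de.hybris.platform.cronjob.enums.CronJobStatus;
import de.hybris.platform.cronjob.model.CronJobModel;
import de.hybris.platform.servicelayer.cronjob.PerformResult;
import org.springframework.stereotype.Component;

@Component
public class BP456 extends AbstractDataMigrationPatch {

    @Override
    String getTicketId() {
        return "BP-456";
    }

    @Override
    PerformResult migrateDataAfterDeploymentOnAllNodes(CronJobModel cronJobModel) {
        // Hypothetical placeholder: batched long-running migration logic goes here,
        // e.g. iterating over a large item set with FlexibleSearch and ModelService.
        return new PerformResult(CronJobResult.SUCCESS, CronJobStatus.FINISHED);
    }
}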
Improve backoffice experience for Patches with long-running data update
When checking or investigating issues with CronJobs created by AbstractDataMigrationPatch, it is necessary to manually determine the relationship between the patch and the cron job, which can be a lengthy and annoying process. To enhance the backoffice experience, it makes sense to extend the OOTB PatchExecution with additional information and a link to the CronJob.
First, the PatchExecution should be extended to store additional data:
<itemtype code="PatchExecution" autocreate="false" generate="false">
    <attributes>
        <attribute qualifier="implementationClass" type="java.lang.String">
            <persistence type="property"/>
        </attribute>
        <attribute qualifier="patchType" type="java.lang.String">
            <persistence type="property"/>
        </attribute>
        <attribute qualifier="dataMigrationJobCode" type="java.lang.String">
            <persistence type="property"/>
        </attribute>
    </attributes>
</itemtype>
An indirect link to the CronJob is used via the string attribute dataMigrationJobCode, as the CronJob can be created after the creation of the PatchExecution or removed after execution. To avoid losing this information, an indirect relation must be used.
implementationClass and patchType are useful for quickly distinguishing between patch types without looking into the source code.
To populate the new attributes, the DefaultPatchExecutionService should be overridden:
package com.blog.patches.service;

import com.blog.patches.patch.AbstractDataMigrationPatch;
import com.blog.patches.patch.AbstractSimplifiedPatch;
import com.blog.patches.util.DataMigrationPatchHelper;
import de.hybris.platform.patches.Patch;
import de.hybris.platform.patches.model.PatchExecutionModel;
import de.hybris.platform.patches.service.impl.DefaultPatchExecutionService;
import de.hybris.platform.servicelayer.model.ModelService;

import javax.annotation.Resource;

public class ExtendedPatchExecutionService extends DefaultPatchExecutionService {

    @Resource
    private ModelService modelService;

    @Override
    public PatchExecutionModel createPatchExecution(Patch patch) {
        PatchExecutionModel patchModel = super.createPatchExecution(patch);
        patchModel.setImplementationClass(patch.getClass().getSimpleName());
        patchModel.setPatchType(resolvePatchType(patch));
        patchModel.setDataMigrationJobCode(resolveDataMigrationJobCode(patch));
        modelService.save(patchModel);
        return patchModel;
    }

    private String resolvePatchType(Patch patch) {
        if (patch instanceof AbstractDataMigrationPatch) {
            return "AbstractDataMigrationPatch";
        } else if (patch instanceof AbstractSimplifiedPatch) {
            return "AbstractSimplifiedPatch";
        }
        return "Patch";
    }

    private String resolveDataMigrationJobCode(Patch patch) {
        if (patch instanceof AbstractDataMigrationPatch) {
            return DataMigrationPatchHelper.resolveCronJobCodeForPatch(patch);
        }
        return null;
    }
}
The default implementation should be replaced in Spring:
<alias name="extendedPatchExecutionService" alias="patchExecutionService"/>
<bean id="extendedPatchExecutionService" parent="defaultPatchExecutionService"
      class="com.blog.patches.service.ExtendedPatchExecutionService"/>
Finally, the backoffice configuration can be adjusted:
<context type="PatchExecution" component="listview">
    <list-view:list-view xmlns:list-view="http://www.hybris.com/cockpitng/component/listView">
        <list-view:column qualifier="patchId"/>
        <list-view:column qualifier="executionStatus"/>
        <list-view:column qualifier="executionTime"/>
        <list-view:column qualifier="implementationClass"/>
        <list-view:column qualifier="patchType"/>
        <list-view:column qualifier="rerunnable"/>
        <list-view:column qualifier="creationtime"/>
    </list-view:list-view>
</context>

<context type="PatchExecution" component="editor-area" merge-by="type">
    <editorArea:editorArea xmlns:editorArea="http://www.hybris.com/cockpitng/component/editorArea">
        <editorArea:tab name="blogbackoffice.PatchExecution.tab.implementation">
            <editorArea:section name="hmc.properties">
                <editorArea:attribute qualifier="implementationClass" readonly="true"/>
                <editorArea:attribute qualifier="patchType" readonly="true"/>
                <editorArea:attribute qualifier="dataMigrationJobCode" readonly="true"/>
            </editorArea:section>
        </editorArea:tab>
    </editorArea:editorArea>
</context>
With this implementation, PatchExecution in the backoffice will display the dataMigrationJobCode string value. The article describes how to display related items for such attributes in the backoffice, allowing you to double-click and view the CronJob instead of manually searching by code in the CronJob section.
Auto-execute patches on CCv2
To execute patches automatically during deployment, it is enough to tick the checkbox of the corresponding extension (blogpatches) in HAC before executing the system update.
For CCv2 deployments, we need to adjust manifest.json to instruct CCv2 how to perform the HAC update operation. For that, we should add a configFile property with the location of an additional file containing the settings for the HAC update. The example below defines a sysupdate.json file inside the resources folder of the blogcore extension.
{
    "properties": [
        {
            "key": "configFile",
            "value": "/opt/hybris/bin/custom/blogcore/resources/sysupdatejson/sysupdate.json"
        }
    ]
}
And an example of the system update configuration file (sysupdate.json):
{
    "init": "Go",
    "initmethod": "update",
    "essential": "true",
    "localizetypes": "true",
    "blogpatches_sample": "true",
    "filteredPatches": "false"
}
For your particular project, it is recommended to configure the checkboxes on the HAC update page and dump the resulting configuration (check the SAP documentation). The file above can be used as a starting point.
How to create a Patch
- Create a Java class in the com.blog.patches.patch package of the blogpatches extension.
- Use the Jira ticket ID as the class name (Java doesn’t support “-” in class names, so just omit it), for example BP123.
- Decorate the class with the @Component annotation.
- Extend the class from AbstractSimplifiedPatch.
- Override getTicketId to return the corresponding Jira ticket number.
- It’s important that the ticket id matches the path to the folder with the impex files: resources/blogpatches/patches/{ticket_number}.
To pre-execute impex in Patch
This always runs before the execution of the Java code, a.k.a. pre-execution.
- Add impex files in the resources/blogpatches/patches/{ticket_number}/ folder of the blogpatches extension.
- There can be several files in one folder, all tied to one patch. For example, a patch class that returns “BP-123” from getTicketId() triggers the execution of all ImpExes in the folder /resources/blogpatches/patches/BP-123/.
- The import order of the ImpExes can be controlled with a filename prefix such as 0_, 1_, etc.
To execute custom java code in Patch
- Implement executeCustomJavaCode(), as in the sketch below.
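A minimal sketch of such an override; the ticket ID BP-789 and the SQL statement are hypothetical examples:

package com.blog.patches.patch;

import org.springframework.stereotype.Component;

@Component
public class BP789 extends AbstractSimplifiedPatch {

    @Override
    String getTicketId() {
        return "BP-789";
    }

    @Override
    protected void executeCustomJavaCode() {
        // Runs between the pre and post ImpEx imports of this patch.
        // Hypothetical example statement; replace with real migration logic.
        executeSql("UPDATE medias SET p_mime = 'image/jpeg' WHERE p_mime IS NULL");
    }
}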
To post-execute impex in Patch
This always runs after the execution of the Java code, a.k.a. post-execution.
- Add impex files in the resources/blogpatches/patches/{ticket_number}/post folder of the blogpatches extension.
- Note the /post folder after the {ticket_number}.
- There can be several files in the post folder, all tied to the same patch. For example, a patch class that returns “BP-123” from getTicketId() triggers the execution of all ImpExes in the folder /resources/blogpatches/patches/BP-123/post/.
- The ImpExes are automatically imported once the Java code execution completes.
To execute long-lasting data migration Java code in Patch
- Implement migrateDataAfterDeploymentOnAllNodes().
- It will create a CronJob with the Patch used as the execution bean. The CronJob is scheduled immediately with a Trigger, but it won’t be executed until the HAC update finishes and the latest code is deployed on the processing node.
- Check AbstractDataMigrationPatch#isPerformable and DataMigrationPatchHelper#createPatchRunningCronjob for more information.
- Use executeCustomJavaCode() if the data migration takes less than 5 minutes.
Implementation summary
- AbstractSimplifiedPatch implements Patch and should be the parent of all patches. It is introduced to simplify patch creation and unify patch structure; it also provides the ability to import impex files and/or execute custom Java code.
- AbstractDataMigrationPatch extends AbstractSimplifiedPatch and should be the parent of all patches that require a long data migration after a release. The data migration is done in a separate cronjob on the processing node. The cronjob is protected so that it executes only once after new code is deployed on the processing node.
- All patches are autowired into patchList automatically via @PostConstruct in PatchesSystemSetup.
- Component scanning is configured for the package com.blog.patches.patch. It allows introducing a Spring bean for each patch with the @Component annotation and avoids merge conflicts in spring.xml.
- FileStructureVersion is required to utilize the Patching Framework without exceptions. There is no functional logic behind it yet.
- The ReleasePerTicket class provides the abstraction of using each Jira ticket as a separate release in Hybris terms.
Helper classes
package com.blog.patches.util;

import de.hybris.platform.core.model.ItemModel;
import de.hybris.platform.cronjob.model.CronJobModel;
import de.hybris.platform.cronjob.model.JobModel;
import de.hybris.platform.cronjob.model.TriggerModel;
import de.hybris.platform.patches.Patch;
import de.hybris.platform.servicelayer.cronjob.JobDao;
import de.hybris.platform.servicelayer.internal.model.ServicelayerJobModel;
import de.hybris.platform.servicelayer.model.ModelService;
import org.apache.commons.collections4.CollectionUtils;

import javax.annotation.Resource;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class DataMigrationPatchHelper {

    @Resource
    private ModelService modelService;
    @Resource
    private JobDao jobDao;
    @Resource
    private PatchCronJobHelper patchCronJobHelper;

    public void createPatchRunningCronjob(Patch patch) {
        // We are using the hybris OOTB naming convention, where the ServicelayerJobModel code and the execution
        // spring bean id are the same as the name of the class which implements the JobPerformable interface.
        String jobCode = resolveJobCodeForPatch(patch);
        ServicelayerJobModel patchJob = patchCronJobHelper.createPatchJob(jobCode, jobCode);
        String cronJobCode = resolveCronJobCodeForPatch(patch);
        CronJobModel patchCronJob = patchCronJobHelper.createPatchCronJob(cronJobCode, patchJob);
        TriggerModel patchCronJobTrigger = patchCronJobHelper.createRunNowTrigger(patchCronJob);
        patchCronJob.setTriggers(List.of(patchCronJobTrigger));
        modelService.saveAll(patchCronJob, patchJob, patchCronJobTrigger);
    }

    public void cleanUpCronjob(Patch patch) {
        String jobCode = resolveJobCodeForPatch(patch);
        List<JobModel> jobsForCleanUp = jobDao.findJobs(jobCode);
        if (CollectionUtils.isEmpty(jobsForCleanUp)) {
            return;
        }
        for (ItemModel itemModel : getItemsForCleanUp(jobsForCleanUp)) {
            modelService.removeAll(itemModel);
        }
    }

    public static String resolveCronJobCodeForPatch(Patch patch) {
        return resolveJobCodeForPatch(patch) + "CronJob";
    }

    private static List<ItemModel> getItemsForCleanUp(List<JobModel> jobsForCleanUp) {
        // We will remove only the triggers, to keep the cronjob in the DB for its logs.
        List<ItemModel> itemsForRemoval = new ArrayList<>();
        for (JobModel jobModel : jobsForCleanUp) {
            Collection<CronJobModel> cronJobs = jobModel.getCronJobs();
            for (CronJobModel cronJob : cronJobs) {
                itemsForRemoval.addAll(cronJob.getTriggers());
            }
        }
        return itemsForRemoval;
    }

    private static String resolveJobCodeForPatch(Patch patch) {
        return patch.getClass().getSimpleName();
    }
}
package com.blog.patches.util;

import com.google.common.collect.Iterables;
import de.hybris.platform.cronjob.model.CronJobModel;
import de.hybris.platform.cronjob.model.JobModel;
import de.hybris.platform.cronjob.model.TriggerModel;
import de.hybris.platform.servicelayer.cronjob.CronJobDao;
import de.hybris.platform.servicelayer.cronjob.JobDao;
import de.hybris.platform.servicelayer.internal.model.ServicelayerJobModel;
import de.hybris.platform.servicelayer.model.ModelService;

import javax.annotation.Resource;
import java.util.Date;
import java.util.List;

public class PatchCronJobHelper {

    @Resource
    private ModelService modelService;
    @Resource
    private JobDao jobDao;
    @Resource
    private CronJobDao cronJobDao;

    public ServicelayerJobModel createPatchJob(String jobCode, String executionSpringBeanID) {
        // ServicelayerJob - executes code with a spring bean id. Check the source code for the jalo classes Job and ServicelayerJob.
        // - If the bean ID is missing in the application context, the cronjob is automatically rescheduled
        //   if the "retry" attribute is true.
        // - SingleExecutable - will execute the job only once by checking the cronjob result. Rescheduling due
        //   to a missing spring bean does not create a cronjob result, so it suits our use case.
        // - Patches are created with the Component annotation, so they receive the class name as bean ID;
        //   that's why we use the class name as the spring ID for the job.
        ServicelayerJobModel jobModel = getOrCreateJob(jobCode);
        jobModel.setSpringId(executionSpringBeanID);
        // Enable the behavior required for the Patch type of job
        jobModel.setActive(true);
        jobModel.setRetry(true);
        jobModel.setSingleExecutable(true);
        return jobModel;
    }

    public CronJobModel createPatchCronJob(String cronJobCode, ServicelayerJobModel jobModel) {
        CronJobModel cronJobModel = getOrCreateCronJob(cronJobCode);
        cronJobModel.setJob(jobModel);
        // The CronJobModel should have the same values for retry etc. as the job.
        cronJobModel.setActive(true);
        cronJobModel.setRetry(true);
        cronJobModel.setSingleExecutable(true);
        return cronJobModel;
    }

    public TriggerModel createRunNowTrigger(CronJobModel cronJob) {
        TriggerModel trigger = modelService.create(TriggerModel.class);
        trigger.setActive(true);
        trigger.setActivationTime(new Date());
        trigger.setCronJob(cronJob);
        return trigger;
    }

    private ServicelayerJobModel getOrCreateJob(String jobCode) {
        // System update automatically creates instances of ServicelayerJobModel, so we need to fetch and reuse them.
        // In case we execute the patch without a system update, we need to create the ServicelayerJob manually.
        List<JobModel> jobs = jobDao.findJobs(jobCode);
        JobModel foundJob = Iterables.getFirst(jobs, null);
        if (foundJob != null) {
            if (!(foundJob instanceof ServicelayerJobModel)) {
                throw new IllegalStateException("Job " + jobCode + " has an unsupported type. Remove it to fix.");
            }
            return (ServicelayerJobModel) foundJob;
        }
        return createJob(jobCode);
    }

    private ServicelayerJobModel createJob(String jobCode) {
        ServicelayerJobModel jobModel = modelService.create(ServicelayerJobModel.class);
        jobModel.setCode(jobCode);
        return jobModel;
    }

    private CronJobModel getOrCreateCronJob(String cronjobCode) {
        List<CronJobModel> cronJobs = cronJobDao.findCronJobs(cronjobCode);
        CronJobModel foundCronJob = Iterables.getFirst(cronJobs, null);
        if (foundCronJob != null) {
            return foundCronJob;
        }
        return createCronJob(cronjobCode);
    }

    private CronJobModel createCronJob(String cronjobCode) {
        CronJobModel cronJob = modelService.create(CronJobModel.class);
        cronJob.setCode(cronjobCode);
        return cronJob;
    }
}
package com.blog.patches.util;

import de.hybris.platform.commerceservices.setup.SetupImpexService;

import javax.annotation.Resource;
import java.io.File;
import java.net.URL;
import java.util.Arrays;
import java.util.Comparator;

public class PatchImpexImportHelper {

    private static final String IMPEX_EXT = ".impex";

    @Resource
    private SetupImpexService setupImpexService;

    public boolean importImpexesFromFolder(String folderPath) {
        File patchFolderWithImpexes = resolvePatchFolderWithImpexes(folderPath);
        if (patchFolderWithImpexes == null) {
            return false;
        }
        File[] files = patchFolderWithImpexes.listFiles();
        if (files == null) {
            return false;
        }
        // Import files in alphabetical order, so execution order can be controlled with 0_, 1_, ... prefixes.
        Arrays.stream(files)
                .filter(file -> file.getName().endsWith(IMPEX_EXT))
                .sorted(Comparator.comparing(File::getName))
                .forEach(file -> setupImpexService.importImpexFile(folderPath + file.getName(), false));
        return true;
    }

    private File resolvePatchFolderWithImpexes(String folderPath) {
        if (folderPath == null) {
            return null;
        }
        URL patchFolderURL = this.getClass().getResource(folderPath);
        if (patchFolderURL == null) {
            return null;
        }
        File folderInFilesystem = new File(patchFolderURL.getPath());
        if (!folderInFilesystem.exists()) {
            return null;
        }
        File[] files = folderInFilesystem.listFiles();
        if (files == null) {
            return null;
        }
        long count = Arrays.stream(files)
                .filter(file -> file.getName().endsWith(IMPEX_EXT))
                .count();
        return count > 0 ? folderInFilesystem : null;
    }
}
package com.blog.patches.util;

import de.hybris.platform.tx.Transaction;
import org.springframework.dao.DataAccessException;
import org.springframework.jdbc.core.JdbcTemplate;

import javax.annotation.Resource;
import java.util.List;

public class PatchSQLHelper {

    private static final List<String> DDL_STATEMENTS = List.of(
            "CREATE",
            "DROP",
            "ALTER",
            "TRUNCATE",
            "COMMENT",
            "RENAME"
    );

    @Resource
    private JdbcTemplate jdbcTemplate;

    public Integer execute(String sql) {
        if (isDdlStatement(sql)) {
            // Data Definition Language (DDL)
            return executeDdlQuery(sql);
        } else {
            // Data Manipulation Language (DML) - SELECT, INSERT, DELETE, and UPDATE
            return executeDmlQuery(sql);
        }
    }

    private boolean isDdlStatement(String sql) {
        String normalizedQuery = sql.trim().toUpperCase();
        for (String ddlStatement : DDL_STATEMENTS) {
            if (normalizedQuery.startsWith(ddlStatement)) {
                return true;
            }
        }
        return false;
    }

    private Integer executeDdlQuery(String sql) throws DataAccessException {
        jdbcTemplate.execute(sql);
        return null;
    }

    private Integer executeDmlQuery(String sql) throws DataAccessException {
        boolean rollbackNeeded = false;
        var tx = Transaction.current();
        tx.begin();
        try {
            return jdbcTemplate.update(sql);
        } catch (Exception e) {
            rollbackNeeded = true;
            throw e;
        } finally {
            if (rollbackNeeded || tx.isRollbackOnly()) {
                tx.rollback();
            } else {
                tx.commit();
            }
        }
    }
}
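One more note: the helper classes rely on @Resource field injection, so they must themselves be declared as Spring beans. A minimal sketch of the spring.xml wiring, assuming the platform provides the standard modelService, jobDao, cronJobDao, setupImpexService and jdbcTemplate beans (the bean ids below are illustrative):

<bean id="patchImpexImportHelper" class="com.blog.patches.util.PatchImpexImportHelper"/>
<bean id="patchSQLHelper" class="com.blog.patches.util.PatchSQLHelper"/>
<bean id="patchCronJobHelper" class="com.blog.patches.util.PatchCronJobHelper"/>
<bean id="dataMigrationPatchHelper" class="com.blog.patches.util.DataMigrationPatchHelper"/>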