Feed aggregator

Getting Your Hands Dirty with TensorFlow 2.0 and Keras API

Andrejus Baranovski - Thu, 2019-10-31 15:00
Diving into the technical details of regression model creation with TensorFlow 2.0 and the Keras API. In TensorFlow 2.0, Keras comes out of the box with the TensorFlow library. The API is simplified and more convenient to use.


Read the complete article here.

AEM Forms – Certify PDF ends up with NoSuchMethodError on bouncycastle

Yann Neuhaus - Thu, 2019-10-31 13:30

As part of an AEM project, we were working on setting up a few actions on PDF files. One of these actions was to Sign & Certify a PDF file. The basic Sign & Certify action provided by AEM works easily by default, but if you look deeper, you might get some surprises. The complexity in this case came from the fact that we absolutely needed the signature to contain a valid Time-Stamp using the Time-Stamp Protocol (TSP) as well as a valid Long-Term Validation (LTV). In this blog, I will talk about one (of the numerous) issues we faced that I believe is related only to AEM on WebLogic.

As I mentioned above, the basic Certify operation works easily, but if you do not take a closer look, it might not include TSP and/or LTV. In our case, using AEM 6.4 SP3 on WebLogic Server 12.2.1.3, we got the Certify operation to work but without TSP & LTV:

Certify PDF - TSP failed & LTV failed

Looking at the AEM Managed Server logs, you can see that the last line is an error message:

####<Aug 28, 2019 12:15:22,278 PM UTC> <Info> <com.adobe.livecycle.usermanager.sslauthprovider.SSLMutualAuthProvider> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '16' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-129013562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000055> <1566994522278> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Got Subject DN as CN=aem-dev,OU=IT,O=dbi services,L=Delemont,ST=Jura,C=CH>
####<Aug 28, 2019 12:15:25,025 PM UTC> <Info> <com.adobe.livecycle.usermanager.sslauthprovider.SSLMutualAuthProvider> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-129513562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994525025> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Got Subject DN as CN=aem-dev,OU=IT,O=dbi services,L=Delemont,ST=Jura,C=CH>
####<Aug 28, 2019 12:15:25,680 PM UTC> <Info> <com.adobe.formServer.config.FormServerConfigImpl> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-12C213562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994525680> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <FSC008: Using the database to access and persist configuration properties.>
####<Aug 28, 2019 12:15:25,681 PM UTC> <Info> <com.adobe.formServer.config.FormServerConfigImpl> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-12C213562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994525681> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <FSC001: The property LastCacheResetTime has been changed from  to 1555070921173>
####<Aug 28, 2019 12:15:25,681 PM UTC> <Info> <com.adobe.formServer.config.FormServerConfigImpl> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-12C213562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994525681> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <FSC001: The property CacheValidationTime has been changed from 0 to 1555070921058>
####<Aug 28, 2019 12:15:25,684 PM UTC> <Info> <com.adobe.formServer.common.cachemanager.CacheConfig> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-12C213562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994525684> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Initializing cache from default values >
####<Aug 28, 2019 12:15:26,130 PM UTC> <Info> <Common> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-130E13562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994526130> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000628> <Created "1" resources for pool "IDP_DS", out of which "1" are available and "0" are unavailable.>
####<Aug 28, 2019 12:15:26,141 PM UTC> <Info> <com.adobe.formServer.common.cachemanager.CacheConfig> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-12C213562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994526141> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Initializing cache from default values >
####<Aug 28, 2019 12:15:26,147 PM UTC> <Info> <com.adobe.formServer.common.cachemanager.CacheConfig> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-12C213562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994526147> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Initializing cache from default values >
####<Aug 28, 2019 12:15:26,153 PM UTC> <Info> <com.adobe.formServer.common.cachemanager.CacheConfig> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-12C213562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994526153> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Initializing cache from default values >
####<Aug 28, 2019 12:15:26,158 PM UTC> <Info> <com.adobe.formServer.common.cachemanager.CacheConfig> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-12C213562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994526158> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Initializing cache from default values >
####<Aug 28, 2019 12:15:26,571 PM UTC> <Info> <com.adobe.formServer.config.FormServerConfigImpl> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-12C213562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994526571> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <FSC008: Using the database to access and persist configuration properties.>
####<Aug 28, 2019 12:15:27,835 PM UTC> <Info> <com.adobe.livecycle.usermanager.sslauthprovider.SSLMutualAuthProvider> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '60' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-13A613562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000057> <1566994527835> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Got Subject DN as CN=aem-dev,OU=IT,O=dbi services,L=Delemont,ST=Jura,C=CH>
####<Aug 28, 2019 12:15:30,923 PM UTC> <Error> <com.adobe.workflow.AWS> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-12C213562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994530923> <[severity-value: 8] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <An exception was thrown with name java.lang.NoSuchMethodError message:org.bouncycastle.asn1.x509.AlgorithmIdentifier.getObjectId()Lorg/bouncycastle/asn1/ASN1ObjectIdentifier; while invoking service SignatureService and operation certify and no fault routes were found to be configured.>

 

At the same time, we also got this kind of message:

ALC-DSC-003-000: com.adobe.idp.dsc.DSCInvocationException: Invocation error.
            at com.adobe.idp.dsc.component.impl.DefaultPOJOInvokerImpl.invoke(DefaultPOJOInvokerImpl.java:152)
            at com.adobe.idp.dsc.interceptor.impl.InvocationInterceptor.intercept(InvocationInterceptor.java:140)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.interceptor.impl.DocumentPassivationInterceptor.intercept(DocumentPassivationInterceptor.java:53)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor$1.doInTransaction(TransactionInterceptor.java:74)
            at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.execute(EjbTransactionCMTAdapterBean.java:357)
            at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.doRequired(EjbTransactionCMTAdapterBean.java:274)
            at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.__WL_invoke(Unknown Source)
            at weblogic.ejb.container.internal.SessionLocalMethodInvoker.invoke(SessionLocalMethodInvoker.java:33)
            at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.doRequired(Unknown Source)
            at com.adobe.idp.dsc.transaction.impl.ejb.EjbTransactionProvider.execute(EjbTransactionProvider.java:129)
            at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor.intercept(TransactionInterceptor.java:72)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.interceptor.impl.InvocationStrategyInterceptor.intercept(InvocationStrategyInterceptor.java:55)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.interceptor.impl.InvalidStateInterceptor.intercept(InvalidStateInterceptor.java:37)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.interceptor.impl.AuthorizationInterceptor.intercept(AuthorizationInterceptor.java:188)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.interceptor.impl.JMXInterceptor.intercept(JMXInterceptor.java:48)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.engine.impl.ServiceEngineImpl.invoke(ServiceEngineImpl.java:121)
            at com.adobe.idp.dsc.routing.Router.routeRequest(Router.java:131)
            at com.adobe.idp.dsc.provider.impl.base.AbstractMessageReceiver.routeMessage(AbstractMessageReceiver.java:93)
            at com.adobe.idp.dsc.provider.impl.vm.VMMessageDispatcher.doSend(VMMessageDispatcher.java:225)
            at com.adobe.idp.dsc.provider.impl.base.AbstractMessageDispatcher.send(AbstractMessageDispatcher.java:69)
            at com.adobe.idp.dsc.clientsdk.ServiceClient.invoke(ServiceClient.java:215)
            at com.adobe.workflow.engine.PEUtil.invokeAction(PEUtil.java:893)
            at com.adobe.idp.workflow.dsc.invoker.WorkflowDSCInvoker.transientInvoke(WorkflowDSCInvoker.java:356)
            at com.adobe.idp.workflow.dsc.invoker.WorkflowDSCInvoker.invoke(WorkflowDSCInvoker.java:159)
            at com.adobe.idp.dsc.interceptor.impl.InvocationInterceptor.intercept(InvocationInterceptor.java:140)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.interceptor.impl.DocumentPassivationInterceptor.intercept(DocumentPassivationInterceptor.java:53)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor$1.doInTransaction(TransactionInterceptor.java:74)
            at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.execute(EjbTransactionCMTAdapterBean.java:357)
            at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.doRequiresNew(EjbTransactionCMTAdapterBean.java:299)
            at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.__WL_invoke(Unknown Source)
            at weblogic.ejb.container.internal.SessionLocalMethodInvoker.invoke(SessionLocalMethodInvoker.java:33)
            at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.doRequiresNew(Unknown Source)
            at com.adobe.idp.dsc.transaction.impl.ejb.EjbTransactionProvider.execute(EjbTransactionProvider.java:143)
            at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor.intercept(TransactionInterceptor.java:72)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.interceptor.impl.InvocationStrategyInterceptor.intercept(InvocationStrategyInterceptor.java:55)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.interceptor.impl.InvalidStateInterceptor.intercept(InvalidStateInterceptor.java:37)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.interceptor.impl.AuthorizationInterceptor.intercept(AuthorizationInterceptor.java:188)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.interceptor.impl.JMXInterceptor.intercept(JMXInterceptor.java:48)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.engine.impl.ServiceEngineImpl.invoke(ServiceEngineImpl.java:121)
            at com.adobe.idp.dsc.routing.Router.routeRequest(Router.java:131)
            at com.adobe.idp.dsc.provider.impl.base.AbstractMessageReceiver.invoke(AbstractMessageReceiver.java:329)
            at com.adobe.idp.dsc.provider.impl.soap.axis.sdk.SoapSdkEndpoint.invokeCall(SoapSdkEndpoint.java:153)
            at com.adobe.idp.dsc.provider.impl.soap.axis.sdk.SoapSdkEndpoint.invoke(SoapSdkEndpoint.java:91)
            at sun.reflect.GeneratedMethodAccessor621.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:498)
            at org.apache.axis.providers.java.RPCProvider.invokeMethod(RPCProvider.java:397)
            at org.apache.axis.providers.java.RPCProvider.processMessage(RPCProvider.java:186)
            at org.apache.axis.providers.java.JavaProvider.invoke(JavaProvider.java:323)
            at org.apache.axis.strategies.InvocationStrategy.visit(InvocationStrategy.java:32)
            at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:118)
            at org.apache.axis.SimpleChain.invoke(SimpleChain.java:83)
            at org.apache.axis.handlers.soap.SOAPService.invoke(SOAPService.java:454)
            at org.apache.axis.server.AxisServer.invoke(AxisServer.java:281)
            at org.apache.axis.transport.http.AxisServlet.doPost(AxisServlet.java:699)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
            at org.apache.axis.transport.http.AxisServletBase.service(AxisServletBase.java:327)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
            at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
            at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
            at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
            at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
            at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
            at com.adobe.idp.dsc.provider.impl.soap.axis.InvocationFilter.doFilter(InvocationFilter.java:43)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
            at com.adobe.idp.um.auth.filter.ParameterFilter.doFilter(ParameterFilter.java:105)
            at com.adobe.idp.um.auth.filter.CSRFFilter.invokeNextFilter(CSRFFilter.java:141)
            at com.adobe.idp.um.auth.filter.CSRFFilter.doFilter(CSRFFilter.java:132)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
            at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3706)
            at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3672)
            at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:328)
            at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
            at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
            at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
            at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2443)
            at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2291)
            at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2269)
            at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1705)
            at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1665)
            at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:272)
            at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
            at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
            at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
            at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
            at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:652)
            at weblogic.work.ExecuteThread.execute(ExecuteThread.java:420)
            at weblogic.work.ExecuteThread.run(ExecuteThread.java:360)
Caused by: java.lang.NoSuchMethodError: org.bouncycastle.asn1.x509.AlgorithmIdentifier.getObjectId()Lorg/bouncycastle/asn1/ASN1ObjectIdentifier;
            at com.adobe.livecycle.signatures.pki.timestamp.TimestampInfoBC.matchesMessageImprint(TimestampInfoBC.java:187)
            at com.adobe.livecycle.signatures.pki.timestamp.TimestampToken.validateRequest(TimestampToken.java:430)
            at com.adobe.livecycle.signatures.pki.impl.PKIOperations.createTimestamp(PKIOperations.java:562)
            at com.adobe.livecycle.signatures.service.impl.TimeStampProviderImpl.getTimestampToken(TimeStampProviderImpl.java:85)
            at com.adobe.idp.cryptoprovider.LCPKCS7Signer$1.getActualAttributes(LCPKCS7Signer.java:256)
            at com.adobe.livecycle.signatures.pki.signature.CMSPKCS7Impl.sign(CMSPKCS7Impl.java:702)
            at com.adobe.livecycle.signatures.pki.impl.PKIOperations.sign(PKIOperations.java:345)
            at com.adobe.livecycle.signatures.service.cryptoprovider.DSSPKCS7Signer.signData(DSSPKCS7Signer.java:84)
            at com.adobe.idp.cryptoprovider.LCPKCS7Signer.sign(LCPKCS7Signer.java:123)
            at com.adobe.internal.pdftoolkit.services.digsig.digsigframework.impl.SignatureHandlerPPKLite.writeSignatureAfterSave(SignatureHandlerPPKLite.java:816)
            at com.adobe.internal.pdftoolkit.services.digsig.impl.SigningUtils.doSigning(SigningUtils.java:820)
            at com.adobe.internal.pdftoolkit.services.digsig.SignatureManager.certifyWrapperAPI(SignatureManager.java:1554)
            at com.adobe.internal.pdftoolkit.services.digsig.SignatureManager.certify(SignatureManager.java:1542)
            at com.adobe.livecycle.signatures.service.impl.SignCertifyImpl.certify(SignCertifyImpl.java:894)
            at com.adobe.livecycle.signatures.service.impl.DocumentSecurityService.certify(DocumentSecurityService.java:1644)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:498)
            at com.adobe.idp.dsc.component.impl.DefaultPOJOInvokerImpl.invoke(DefaultPOJOInvokerImpl.java:118)
            ... 102 more

 

Based on the above messages, it is clear that there is a problem with some of the bouncycastle classes. This kind of thing is usually a missing class (“ClassNotFoundException“) or a conflict between two or more versions that are loaded by WebLogic (“NoSuchMethodError“), where the loaded/active version does not contain the specific Java method being called. We opened an SR with Adobe Support (#188938) because this kind of thing shouldn’t be happening, but after a few days without any meaningful update from them, I decided to look into the product myself to stop losing time on such a trivial thing.

So this specific class (“org.bouncycastle.asn1.x509.AlgorithmIdentifier“) can be found in numerous jar files: apacheds*.jar, bcprov*.jar, bouncycastle*.jar, ec2*.jar, etc. I checked all these jar files in our WebLogic Server libraries as well as the AEM ones and found what I believe was the issue: different versions of these jars being loaded (a sketch of such a scan is shown after the list of conflicting files below). To confirm and before changing anything, I deployed the WebLogic Classloader Analysis Tool (CAT) and found:

  • 0 conflicts in adobe-livecycle-cq-author.ear
  • 0 conflicts in adobe-livecycle-native-weblogic-x86_linux.ear
  • 5339 conflicts in adobe-livecycle-weblogic.ear

 
These numbers pretty much confirmed what I thought already. Going further, I found a few hundred conflicts related to the “org.bouncycastle.*” classes alone. One of these was for the class “org.bouncycastle.asn1.x509.AlgorithmIdentifier”, which was conflicting between the following files:

  • WebLogic: $MW_HOME/oracle_common/modules/org.bouncycastle.bcprov-jdk15on.jar (1st loaded)
  • WebLogic: $MW_HOME/oracle_common/modules/org.bouncycastle.bcprov-ext-jdk15on.jar
  • AEM: $APPLICATIONS/adobe-livecycle-weblogic.ear/bcprov-151.jar
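
A quick way to reproduce this kind of manual check is to scan all candidate jar files for the class. A minimal sketch, assuming the EAR is exploded on disk as in the listing above (the two path patterns are assumptions to adapt to your environment):

for jar in $MW_HOME/oracle_common/modules/*.jar \
           $APPLICATIONS/adobe-livecycle-weblogic.ear/*.jar; do
  # print every jar that bundles the conflicting class
  if unzip -l "$jar" 2>/dev/null | grep -q "org/bouncycastle/asn1/x509/AlgorithmIdentifier.class"; then
    echo "$jar"
  fi
done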

 
So what should be done to fix this? Well, a simple solution is just to force WebLogic to use the AEM-provided files by default by updating the load preferences:

[weblogic@aem-node-1 ~]$ cd $APPLICATIONS
[weblogic@aem-node-1 AEM]$ 
[weblogic@aem-node-1 AEM]$ jar -xvf adobe-livecycle-weblogic.ear META-INF/weblogic-application.xml
[weblogic@aem-node-1 AEM]$ 
[weblogic@aem-node-1 AEM]$ grep -B1 "</prefer-application-packages>" META-INF/weblogic-application.xml
<package-name>org.mozilla.javascript.xmlimpl.*</package-name>
</prefer-application-packages>
[weblogic@aem-node-1 AEM]$ 
[weblogic@aem-node-1 AEM]$ sed -i 's,</prefer-application-packages>,<package-name>org.bouncycastle.*</package-name>\n&,' META-INF/weblogic-application.xml
[weblogic@aem-node-1 AEM]$ 
[weblogic@aem-node-1 AEM]$ grep -B2 "</prefer-application-packages>" META-INF/weblogic-application.xml
<package-name>org.mozilla.javascript.xmlimpl.*</package-name>
<package-name>org.bouncycastle.*</package-name>
</prefer-application-packages>
[weblogic@aem-node-1 AEM]$ 
[weblogic@aem-node-1 AEM]$ jar -uvf adobe-livecycle-weblogic.ear META-INF/weblogic-application.xml
[weblogic@aem-node-1 AEM]$ rm -rf META-INF
[weblogic@aem-node-1 AEM]$

 

What the above commands do is simply add “<package-name>org.bouncycastle.*</package-name>” just before the end of the “<prefer-application-packages>” section so that WebLogic knows it needs to use the AEM-provided classes for this package and shouldn’t use its own files. Once that is done, simply redeploy the EAR file. In my case, I was left with “only” 2442 conflicts, none of them regarding bouncycastle (obviously).
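
The redeployment itself can be done through the WebLogic Admin Console or scripted with the weblogic.Deployer utility. As a sketch only – the admin URL, user and deployment name below are assumptions to adapt to your domain:

[weblogic@aem-node-1 AEM]$ java weblogic.Deployer -adminurl t3://aem-node-1:7001 -username weblogic \
                                -redeploy -name adobe-livecycle-weblogic \
                                -source $APPLICATIONS/adobe-livecycle-weblogic.ear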

After that, executing the same Certify action with the new classloader preferences resulted in no more errors:

####<Aug 28, 2019 1:12:22,359 PM UTC> <Info> <com.adobe.livecycle.usermanager.sslauthprovider.SSLMutualAuthProvider> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '109' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-1475E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000071> <1566997942359> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Got Subject DN as CN=aem-dev,OU=IT,O=dbi services,L=Delemont,ST=Jura,C=CH>
####<Aug 28, 2019 1:12:23,702 PM UTC> <Info> <com.adobe.livecycle.usermanager.sslauthprovider.SSLMutualAuthProvider> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-147BE729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997943702> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Got Subject DN as CN=aem-dev,OU=IT,O=dbi services,L=Delemont,ST=Jura,C=CH>
####<Aug 28, 2019 1:12:24,199 PM UTC> <Info> <com.adobe.formServer.config.FormServerConfigImpl> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-14A8E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997944199> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <FSC008: Using the database to access and persist configuration properties.>
####<Aug 28, 2019 1:12:24,199 PM UTC> <Info> <com.adobe.formServer.config.FormServerConfigImpl> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-14A8E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997944199> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <FSC001: The property LastCacheResetTime has been changed from  to 1555070921173>
####<Aug 28, 2019 1:12:24,200 PM UTC> <Info> <com.adobe.formServer.config.FormServerConfigImpl> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-14A8E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997944200> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <FSC001: The property CacheValidationTime has been changed from 0 to 1555070921058>
####<Aug 28, 2019 1:12:24,202 PM UTC> <Info> <com.adobe.formServer.common.cachemanager.CacheConfig> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-14A8E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997944202> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Initializing cache from default values >
####<Aug 28, 2019 1:12:24,691 PM UTC> <Info> <Common> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-14F2E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997944691> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000628> <Created "1" resources for pool "IDP_DS", out of which "1" are available and "0" are unavailable.>
####<Aug 28, 2019 1:12:24,704 PM UTC> <Info> <com.adobe.formServer.common.cachemanager.CacheConfig> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-14A8E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997944704> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Initializing cache from default values >
####<Aug 28, 2019 1:12:24,710 PM UTC> <Info> <com.adobe.formServer.common.cachemanager.CacheConfig> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-14A8E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997944710> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Initializing cache from default values >
####<Aug 28, 2019 1:12:24,717 PM UTC> <Info> <com.adobe.formServer.common.cachemanager.CacheConfig> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-14A8E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997944717> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Initializing cache from default values >
####<Aug 28, 2019 1:12:24,724 PM UTC> <Info> <com.adobe.formServer.common.cachemanager.CacheConfig> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-14A8E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997944724> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Initializing cache from default values >
####<Aug 28, 2019 1:12:24,928 PM UTC> <Info> <com.adobe.formServer.config.FormServerConfigImpl> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-14A8E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997944928> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <FSC008: Using the database to access and persist configuration properties.>

 

The generated PDF now contained correct Time-Stamp information but still no LTV information:

Certify PDF - TSP working & LTV failed

Finally, adding a Validation step after the Certify step in the process (in the AEM Application (LCA)) allowed both TSP and LTV information to be shown properly:

Certify PDF - TSP working & LTV working

The post AEM Forms – Certify PDF ends up with NoSuchMethodError on bouncycastle appeared first on Blog dbi services.

AEM Forms – WebLogic Clustering synch issue for Workbench 2-way-SSL

Yann Neuhaus - Thu, 2019-10-31 13:00

In a previous blog, I described the process to set up AEM Forms to allow the AEM Workbench to connect to AEM using “2-way-SSL”. This setup is normally independent of the Application Server that you are using to host AEM. However, I already faced an issue (other than this one) which was caused by the 2-way-SSL setup for the Workbench when a WebLogic Cluster is used to host AEM.

As mentioned in the previous blog, I’m not an AEM expert but I know a few things about WebLogic, so the idea here was to set up a fully functional WebLogic Cluster composed of two Managed Servers on two hosts/machines, test it properly and then install the AEM Forms application on top of it. Obviously, AEM Forms was configured behind a Load Balancer for this purpose. At this point, AEM Forms was working perfectly in HA and stopping one of the nodes wasn’t a problem.

Then I configured the Workbench for 2-way-SSL. I did so while being connected to the AEM Node1 in Workbench, creating the Hybrid Domain in the AEM AdminUI Node1, etc. At the end of the setup, the AEM Workbench was working properly with the 2-way-SSL setup as well, so it looked like the setup was complete. Just to be sure, I stopped the AEM Node1 and tried to log in to the AEM Workbench with the exact same parameters (same keystore, same truststore, same passwords) except for the target Server, which I switched to the AEM Node2. Doing so, the login failed and I could see the following message in the AEM Node2 Managed Server logs:

####<Feb 12, 2019 2:14:46,277 PM UTC> <Info> <EJB> <aem-node-2> <msAEM-02> <[ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <81fe4dac-31f0-4c25-bf37-17d5b327a901-0000005e> <1549980886277> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-010227> <EJB exception occurred during invocation from home or business: com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionBMTAdapter_fw85em_Intf generated exception: ALC-DSC-124-000: com.adobe.idp.dsc.DSCAuthorizationException: User does not have the Service Read Permission.>

 
Just looking at this message, it’s clear that the user account that works properly for the AEM Node1 isn’t working for the AEM Node2. After some investigation, it looked like the Hybrid Domain wasn’t shown on the AEM AdminUI Node2, for some reason… Both nodes are using the same Oracle Database and the same GDS (Global Document Storage) path, so I thought that the issue might be related to a cache somewhere in AEM. Therefore, I thought about re-creating the Hybrid Domain but cancelled this move right away because I assumed it could have brought me more trouble than solutions (I didn’t want to create 2 objects with the same name, risk corruption or whatever…):

  • Open the AEM AdminUI Node2 (Node1 is still down) (https://<AEM_HOST_2>:<AEM_PORT>/adminui)
  • Login with an administrative account (E.g.: administrator)
  • Navigate to: Settings > User Management > Domain Management
    • -> Only 1 domain is displayed, the default one: DefaultDom
  • Click on “New Hybrid Domain”
    • Click on “Cancel”

 
After doing that, the Hybrid Domain (the one created in this blog, named “SSLMutualAuthProvider“) magically appeared, so I assume that it forced a synchronization and an update of the cache on the AEM Node2. Trying to log in to the AEM Workbench again without changing the parameters printed the following in the AEM Node2 Managed Server logs:

####<Feb 12, 2019 2:30:43,208 PM UTC> <Info> <com.adobe.livecycle.usermanager.sslauthprovider.SSLMutualAuthProvider> <aem-node-2> <msAEM-02> <[ACTIVE] ExecuteThread: '117' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-24A18C6CA9D79C032EFA> <81fe4dac-31f0-4c25-bf37-17d5b327a901-00000067> <1549981843208> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Got Subject DN as CN=aem-dev,OU=IT,O=dbi services,L=Delemont,ST=Jura,C=CH>

 
The above message means that the login is successful and Workbench is able to load the data from AEM properly. I guess that there are other ways to fix this issue: there is a “Sync Now” as well as a “Refresh” button on the Domain Management page of the AdminUI, so maybe one of these would have done the same thing and forced a synchronization… Since the cancelled creation of the Hybrid Domain already solved my issue, I couldn’t test more, unfortunately. A restart of the AEM Node2 is also sufficient to force a refresh, but this takes a few minutes and requires a downtime, so it’s not ideal.

The post AEM Forms – WebLogic Clustering synch issue for Workbench 2-way-SSL appeared first on Blog dbi services.

IOT Hash

Jonathan Lewis - Thu, 2019-10-31 09:59

It’s another of my double-entendre titles. The optimizer can turn a hash join involving an index-organized table into a real performance disaster (though you may have to help it along the way by using a silly definition for your primary key columns). This post was inspired by a question posted on the Oracle Developer Community forum recently so the table and column names I’ve used in my model reflect (almost, with a few corrections) the names used in the post.

We start with a simple requirement expressed through the following SQL:


rem
rem     Script:         iot_hash.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

insert
        /*+
                qb_name(insert)
        */
into t_iot(
        id, inst_id, nr_time,
        o_time, st, null_col, svname
)
select
        /*+
                qb_name(main)
                unnest(@subq)
                leading(@sel$a93afaed apar@main ob@subq)
                use_hash(@sel$a93afaed ob@subq)
                swap_join_inputs(@sel$a93afaed ob@subq)
                index_ss_asc(@sel$a93afaed ob@subq (t_iot.st t_iot.inst_id t_iot.svname))
        */
        apar.id,
        'UP',
        to_date('2019-10-24','yyyy-mm-dd'),
        to_date('1969-12-31','yyyy-mm-dd'),
        'IDLE',
        null,
        'tkt007.jj.bb.com'
from
        t_base apar
where
        apar.id not in (
                select
                        /*+
                                qb_name(subq)
                        */
                        id
                from
                        t_iot ob
                where
                        inst_id = 'UP'
        )
and     nvl(apar.gp_nm,'UA') = 'UA'
and     rownum <= 5000
/

The requirement is simple – insert into table t_iot a set of values dictated by a subset of the rows in table t_base if they do not already exist in t_iot. To model the issue that appeared I’ve had to hint the SQL to get the following plan (which I pulled from memory after enabling rowsource execution stats):


---------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name        | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |             |      1 |        |   296 (100)|      0 |00:00:00.03 |     788 |    148 |       |       |          |
|   1 |  LOAD TABLE CONVENTIONAL | T_IOT       |      1 |        |            |      0 |00:00:00.03 |     788 |    148 |       |       |          |
|*  2 |   COUNT STOPKEY          |             |      1 |        |            |    100 |00:00:00.03 |      99 |     91 |       |       |          |
|*  3 |    HASH JOIN RIGHT ANTI  |             |      1 |    100 |   296   (2)|    100 |00:00:00.03 |      99 |     91 |    14M|  1843K|   15M (0)|
|*  4 |     INDEX SKIP SCAN      | T_IOT_STATE |      1 |  12614 |   102   (0)|  10000 |00:00:00.01 |      92 |     91 |       |       |          |
|*  5 |     TABLE ACCESS FULL    | T_BASE      |      1 |    100 |     2   (0)|    100 |00:00:00.01 |       7 |      0 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(ROWNUM<=5000)
   3 - access("APAR"."ID"="ID")
   4 - access("INST_ID"='UP')
       filter("INST_ID"='UP')
   5 - filter(NVL("APAR"."GP_NM",'UA')='UA')

The optimizer has unnested (as hinted) the subquery and converted it to an anti-join using a right hash anti-join. Take a look at the Used-mem for the hash join – would it surprise you to learn that the IOT (not compressed in any way), all its indexes, and the t_base table together total less than 4 MB? Something dramatically awful has happened in the hash join to generate a requirement of 14MB. (In the case of the OP this appeared as an unexpected 5GB written to the temporary tablespace.)

Before I address the source of the high memory usage, take a close look at the Predicate Information, particularly operation 3, and ask yourself what the definition of index t_iot_state might be. The predicate joins t_base.id to t_iot.id, and here’s the code to create both tables and all the indexes.

create table t_iot (
        nr_time         timestamp,
        id              varchar2(1024),
        inst_id         varchar2(200),
        o_time          timestamp,
        st              varchar2(200),
        null_col        varchar2(100),
        svname          varchar2(200),
        constraint t_iot_pk primary key(nr_time, id, inst_id)
)
organization index
/

insert into t_iot
select
        sysdate,
        dbms_random.string('l',10),
        'UP',
        sysdate,
        'IDLE',
        null,
        rpad('x',25,'x')
from
        all_objects
where
        rownum <= 1e4 -- > hint to avoid wordpress format issue
/

create index t_iot_state on t_iot(st, inst_id, svname); 
create index idx2        on t_iot(id, inst_id, svname);

create table t_base(
        id              varchar2(400) not null,
        gp_nm           varchar2(200)
)
/

insert into t_base
select
        dbms_random.string('l',10),
        'UA'
from
        all_objects
where
        rownum <= 100 -- > hint to avoid wordpress format issue
;


begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 't_iot',
                cascade     => true,
                method_opt  => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 't_base',
                cascade     => true,
                method_opt  => 'for all columns size 1'
        );
end;
/


The index t_iot_state that Oracle has used in the hash join is defined on the columns (st, inst_id, svname) – so the predicate is doing a comparison with a column that’s not in the index! At least, it’s not visibly declared in the index; but this is a secondary index on an IOT, and IOTs don’t have “normal” rowids: the rowid in a secondary index is the value of the primary key (plus a “block guess”). So the columns in the index (even though not declared in the index) are: (st, inst_id, svname, {nr_time, id, inst_id, blockguess}). So this index does supply the required id column.

Side note: you’ll see in the list of columns above that inst_id appears twice. In fact (since Oracle 9, I think) the code to handle secondary indexes has been smart enough to avoid this duplication. If the secondary index contains columns from the primary key then the “rowid” doesn’t store those columns; the code knows how to construct the primary key value from the stored PK columns combined with the needed columns from the index entry. This can make IOTs a very nice choice of implementation for “intersection” tables that are used to represent many-to-many joins between two other tables.

Unfortunately this “rowid” is the explanation for the massive memory demand. Take a look at the “Column Projection Information” for my execution plan:


Column Projection Information (identified by operation id):
-----------------------------------------------------------
   2 - "APAR"."ID"[VARCHAR2,400], "APAR"."GP_NM"[VARCHAR2,200], ROWNUM[8]
   3 - (#keys=1) "APAR"."ID"[VARCHAR2,400], "APAR"."GP_NM"[VARCHAR2,200]
   4 - "OB".ROWID[ROWID,1249], "NR_TIME"[TIMESTAMP,11], "ID"[VARCHAR2,1024], "INST_ID"[VARCHAR2,200], "OB".ROWID[ROWID,1249]
   5 - "APAR"."ID"[VARCHAR2,400], "APAR"."GP_NM"[VARCHAR2,200]

The interesting line is operation 4. A hash join takes the rowsource from its first child (the build table) and creates an in-memory hash table (which may spill to disc, of course), so if I see an unreasonable memory allocation (or unexpected spill to disc) a good starting point is to look at what the first child is supplying. In this case the first child seems to be saying that it’s supplying (or allowing for) nearly 3,700 bytes to be passed up to the hash join.

On closer inspection we can see it’s reporting the “rowid” twice, and also reporting the three columns that make up that rowid. I think it’s reasonable to assume that it’s only supplying the rowid once, and maybe it’s not even supplying the other three columns because they are embedded in the rowid. Doing a quick arithmetic check, let’s multiply the size of the rowid by the value of A-rows: 1,249 * 10,000 = 12,490,000. That’s pretty close to the 14MB reported by the hash join in operation 3.

Hypothesis – to get at the id column, Oracle has used this index (actually a very bad choice of those available) to extract the rowid and then passed the rowid up to the parent in a (length-padded) fixed format. Oracle has then created a hash table by extracting the id column from the rowid and building the hash table on it, but also carrying the length-padded rowid into the hash table. Possible variants on this theme are that some or all of the other columns in the Column Projection Information are also passed upwards so that the id doesn’t have to be extracted, but if they are, they are not padded to their maximum length.

A simple test that this is broadly the right assumption is to re-run the model making the declared length of the rowid much larger to see what happens to the memory allocation. Changing the inst_id declaration from 200 bytes to 1000 bytes (note that the stored value is only the 2 bytes needed for the value ‘UP’), the Used-mem jumps to 23 MB (which is an increment very similar to 800 * 10,000). You’ll note that I chose to experiment with a column that wasn’t the column used in the join. It was a column in the secondary index definition, though, so another test would be to change the nr_time column from a timestamp (11 bytes) to a large varchar2. So I re-ran the test declaring nr_time as a varchar2(1000) – reverting inst_id to varchar2(200) – and the Used-mem increased to 25MB.

Preliminary Conclusion

If Oracle uses the contents of the rowid of a secondary index on an IOT in a join then it constructs a fixed format version for the rowid by padding every column in the primary key to its maximum length and concatenating the results. This can have catastrophic side effects on performance if you’ve declared some very long columns “just in case”. Any time you use index organized tables you should remember to check the Column Projection Information in any execution plans that use secondary indexes in case they are passing a large, padded, primary key through the plan to a point where a blocking operation (such as a hash join or merge join) has to accumulate a large number of rows.

Footnote

In my test case I had to hint the query heavily to force Oracle into the path I wanted to demonstrate.

It’s surprising that the optimizer should have chosen this path in the OP’s system, given that there’s another secondary index that contains the necessary columns in its definition. (So one thought is that there’s a statistics problem to address, or possibly the “good” index is subject to updates that make it become very inefficient (large) very quickly.)

Another oddity of the OP’s system was that Oracle chose to do a right hash anti-join when it looked as if joining the tables in the opposite order would produce a much smaller memory demand and lower cost – there was an explicit swap_join_inputs() hint in the Outline Information (so copying the outline into the query and changing that to no_swap_join_inputs() might have been another viable workaround). In the end the OP hinted the query to use a nested loop (anti-)join from t_base to t_iot – which is another way to avoid the hash table threat with padded rowids.

 

How to Record your Own Music: a Beginner’s Manual

VitalSoftTech - Thu, 2019-10-31 09:56

If you’re an aspiring musician or independent artist, you might be wondering how to record your own music at home through a DIY music production process. By doing this, you could potentially save a lot of money, time, and effort that goes into producing a song. In the 21st century, there’s an app available for […]

The post How to Record your Own Music: a Beginner’s Manual appeared first on VitalSoftTech.

Categories: DBA Blogs

Database Design Question

Tom Kyte - Thu, 2019-10-31 09:47
Hello, Ask Tom Team. I have a table A with columns (client_id, invoice_number, invoice_type) and a composite primary key (client_id, invoice_number). At the business level there are 5 invoice_types (70, 71, 72, 73, 74). The invoice_number always brings the ...
Categories: DBA Blogs

finding out the source of the data in a table

Tom Kyte - Thu, 2019-10-31 09:47
Hi Tom, I am working on a database application, and I need to know how a table is being populated in the database schema. I have tried querying xxx_dependencies and xxx_source but of no use. I believe that this table might be populated from an...
Categories: DBA Blogs

How to change the plan of a query in execution??

Tom Kyte - Thu, 2019-10-31 09:47
So I often face an issue where a query generates an execution plan assuming wrong cardinality, and since it assumes it as 1 it goes into a MERGE CARTESIAN join. Now if I gather the stats, the query does not automatically pick up the new plan. Is there a ...
Categories: DBA Blogs

List of event codes for traces

Tom Kyte - Thu, 2019-10-31 09:47
Hello Masters, Can you give me a link on docs.oracle.com where all the system event codes for traces, like 10046, 10053..., are listed and, most importantly, with their meaning? I know the 10046, 10053 codes but I am sure there are many ...
Categories: DBA Blogs

Performance of a VIEW on multiple tables (historical plus current)

Tom Kyte - Thu, 2019-10-31 09:47
Good afternoon, I have a customer that manages millions of rows in a lot of tables. The thing is that we have a big table from 2013 until now which we would like to divide in two, one for the last three months and one for the rest (which we call...
Categories: DBA Blogs

Restrict Application access to developers in same workspace

Tom Kyte - Thu, 2019-10-31 09:47
How to show a user only a certain set of applications in App Builder (i.e. the user should not be able to see all applications in App Builder)? In an Oracle APEX workspace, I need to create a new user (admin role) such that the new user can only see sele...
Categories: DBA Blogs

Find if event spanning several dates happened via SQL

Tom Kyte - Thu, 2019-10-31 09:47
Hi Tom, I have data like below

event_flag   event_date
1            date1
1            date2
0            date3
1            date4
0            date5
0            date6
1            date7
1            date8
1            date9
...
Categories: DBA Blogs

pragma autonomous_transaction within procedure before creating synonyms

Tom Kyte - Thu, 2019-10-31 09:47
Hi Tom, I have created a stored procedure within an Oracle package which creates a list of synonyms depending upon the change of the dblink server name. I need to execute this procedure to create/replace synonyms pointing to another dblink server. My quest...
Categories: DBA Blogs

Undo Tablespaces.

Tom Kyte - Thu, 2019-10-31 09:47
Hi Tom, I have been waiting to ask you this question. What is an Undo Tablespace in 9i? Is this similar to Rollback Segments? What are Non-Standard Block sizes and why non-standard? Why am I not able to create a RS on a Locally Managed Automatically Si...
Categories: DBA Blogs

How to count no of records in table without count?

Tom Kyte - Thu, 2019-10-31 09:47
2) How to count the number of records in a table without count? --- Actually, this question was asked when I attended an interview at Dell. I don't know why people ask these types of questions, but I gave an answer in my own way. --->...
Categories: DBA Blogs

Use of Blockchain Helps Speed Global Shipping Transactions

Oracle Press Releases - Thu, 2019-10-31 06:00

By Guest Author, Oracle—Oct 31, 2019

While ocean shipping has been steadily growing in recent years, multiple pressures are impacting the predictability of the far-flung supply chains and pressuring industry profits. Key to navigating these challenges for shippers, ocean carriers, terminal operators, and other parties involved with shipping ocean freight is better visibility and real-time access to information.

That’s a challenge given the sheer number of participants in a single shipment—and often leads to delays, disputes, and significant extra costs for shippers and carriers. But things are changing thanks to the help of emerging technology and the formation of a new blockchain consortium in this space pioneered by CargoSmart.

For the last year, CargoSmart and Oracle have been working together to develop a blockchain solution that aims to simplify the shipping documentation process and deliver a single source of truth for trusted, real-time sharing of information, thereby increasing trust and boosting efficiency.

More recently, CargoSmart announced a significant milestone in forming the Global Shipping Business Network (GSBN) blockchain consortium, comprising nine leading ocean carriers and terminal operators: CMA CGM, COSCO SHIPPING Lines, COSCO SHIPPING Ports, Hapag-Lloyd, Hutchison Ports, OOCL, Port of Qingdao, PSA International, and Shanghai International Port Group. This not-for-profit joint venture aims to accelerate the digital transformation of the shipping industry, with CargoSmart providing the resources underpinning it.

“The close cooperation between our R&D group and Oracle’s blockchain team has already helped to accelerate the development of some pilot applications. By leveraging Oracle’s enriched technical support and advice, CargoSmart has been able to achieve high levels of operational capability, reduce R&D time, and significantly improve the productivity of its blockchain application developers,” said Romney Wong, Chief Technology Officer of CargoSmart.

Watch the CargoSmart Video

Watch this video to learn more about how CargoSmart is leveraging Oracle Blockchain Platform to bring transparency and trust into the complex systems behind CargoSmart’s logistics and shipping management.

 


Read More Stories from Oracle Cloud

CargoSmart is one of thousands of customers on their journey to the cloud. Read about others in Stories from Oracle Cloud: Business Successes.

SLES15 SP1 – New features

Yann Neuhaus - Thu, 2019-10-31 05:20

With SLES15, SUSE introduced the Multimodal OS and the unified installer. This means you only get what you really need. Your OS is flexible and you can easily add features you need and remove them as well. But this article isn’t meant to be an explanation of the Multimodal OS; it will show you some of the new features of SLES15 SP1.

SUSE supports the migration from SLES15 to SLES15 SP1 in online mode.
You can upgrade in two ways: YaST migration (GUI) and Zypper migration (command line).
Be sure that your system is registered at the SUSE Customer Center or against a local RMT server. Afterwards, just use “zypper migration”, type the number of the product you want to migrate to and accept the terms of the license. That’s it.
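
A minimal sketch of such a session – the output below is abbreviated and illustrative, and the numbering of the migration targets is interactive and differs per system:

sles15:~ # zypper migration
...
Available migrations:
    1 | SUSE Linux Enterprise Server 15 SP1 x86_64
...
[num/q]: 1

After confirming the target and accepting the license, zypper performs the online migration.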
The best way to check whether the migration was successful:

sles15:~ # cat /etc/os-release | grep PRETTY_NAME
PRETTY_NAME="SUSE Linux Enterprise Server 15 SP1"

So let’s have a look at the new features and improvements of SLES15 SP1 .

Unified Installer

SUSE Manager Server and Proxy are now available as base products. Both can be installed using the unified installer.
Point of Service and SLE Real Time are also included in the unified installer now.

Transactional Update

In openSUSE Leap and SUSE CaaS, transactional update was already implemented; now it is also possible to run transactional updates with SLE. To install transactional update, the Transactional Server Module needs to be activated first (no additional key is needed). Afterwards, the transactional-update package and its dependencies can be installed.

sle15:~ #  SUSEConnect --product sle-module-transactional-server/15.1/x86_64
Registering system to SUSE Customer Center

Updating system details on https://scc.suse.com ...

Activating sle-module-transactional-server 15.1 x86_64 ...
-> Adding service to system ...
-> Installing release package ...

Successfully registered system
sle15:~ # zypper install transactional-update
Refreshing service 'Basesystem_Module_15_SP1_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Server_15_SP1_x86_64'.
Refreshing service 'Server_Applications_Module_15_SP1_x86_64'.
Refreshing service 'Transactional_Server_Module_15_SP1_x86_64'.
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following 6 NEW packages are going to be installed:
  attr bc openslp psmisc rsync transactional-update

6 new packages to install.
Overall download size: 686.6 KiB. Already cached: 0 B. After the operation, additional 1.3 MiB will be used.
Continue? [y/n/v/...? shows all options] (y): y

As you may know, SUSE uses btrfs with snapper as the default for the file systems. This builds the basis for transactional updates. Transactional updates are applied into a new snapshot, so the running system is not touched; the updated snapshot is activated after the next reboot. So this is an update that is
– Atomic: either fully applied or not at all.
– Easily revertible: after a failed update, returning to the previous (running) system is easy.
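In practice, a transactional update session could look like the following sketch; the package name is just an example, and the command output is omitted:

sle15:~ # transactional-update up                # apply all updates into a new snapshot
sle15:~ # transactional-update pkg install vim   # or install a single package into a new snapshot
sle15:~ # systemctl reboot                       # the new snapshot becomes active after the reboot
sle15:~ # transactional-update rollback          # if needed, return to the previous snapshot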

Simplified btrfs layout

There is only a single subvolume under /var, not 10, for simplified and consistent snapshots. This only takes effect for fresh installations; upgraded systems still use the old layout.
Starting with SLES15 SP1 there is also the possibility to have each home directory as a single subvolume, but this is not the default.
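You can verify which layout a system uses by listing the btrfs subvolumes; on a fresh SP1 installation you should see a single entry for /var (the IDs below are illustrative):

sles15:~ # btrfs subvolume list / | grep var
ID 262 gen 48 top level 265 path @/var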

Secure encrypted virtualization (SEV)

Data encryption is an important topic in today’s IT environments. Data stored on disk is widely encrypted, but what about the data in RAM? AMD’s SEV offers the possibility to protect Linux KVM virtual machines by encrypting the memory of each VM with a unique key. It can also generate a signature that attests the correct encryption.
This increases system security a lot and protects VMs from memory scrape attacks from the hypervisor.
With SLES15 SP1, SUSE provides full support for this technology. For further information about SEV, click here.
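To check whether SEV is available and enabled on a KVM host, a quick look like the following can help (assuming an AMD host with the kvm_amd module loaded; the exact value and dmesg wording vary with the kernel version):

sles15:~ # cat /sys/module/kvm_amd/parameters/sev
1
sles15:~ # dmesg | grep -i sev
SEV supported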

Quarterly Updates

Starting with 15 SP1, SUSE offers quarterly updates of the installation and package media. They will be refreshed every quarter with all maintenance and security updates, so for the setup of new systems there is always a recent and up-to-date state.

Conclusion

This is not the full list of new features, only an excerpt. Nevertheless, especially the transactional update feature makes it worth the effort to upgrade to SLES15 SP1. And always think about the security improvements that come with every new release.

The article SLES15 SP1 – New features appeared first on Blog dbi services.

Square root in Excel – A Step-By-Step Tutorial

VitalSoftTech - Wed, 2019-10-30 10:30

Have you ever stopped to wonder what life would have been like when there wasn’t a calculator to perform the arithmetical operations for people? It surely sends a shiver down your spine to even imagine the horror of having to survive without a calculator. But in the present times, it wouldn’t be wrong to state […]

The post Square root in Excel – A Step-By-Step Tutorial appeared first on VitalSoftTech.

Categories: DBA Blogs

Patroni Operations – Changing Parameters

Yann Neuhaus - Wed, 2019-10-30 10:17

Sooner or later all of us have to change a parameter on the database. But how is this done when using a Patroni cluster? Of course there are some specifics you have to consider.
This post will give you a short introduction to this topic.

When you want to change a parameter on a Patroni cluster you have several possibilities (see the sketch after this list for where the first two levels live):
– Dynamic configuration in DCS. These changes are applied asynchronously to every node.
– Local configuration in patroni.yml. This will take precedence over the dynamic configuration.
– Cluster configuration using “alter system”.
– Environment configuration using local environment variables.
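As a rough orientation, the snippet below sketches where the dynamic and local configuration levels live in patroni.yml; it is a trimmed illustration, not a complete configuration:

# patroni.yml (excerpt)
bootstrap:
  dcs:                        # initial dynamic configuration, stored in the DCS
    postgresql:
      parameters:
        max_connections: 100
postgresql:
  parameters:                 # local configuration, takes precedence over the DCS
    work_mem: 8MB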

Change PostgreSQL parameters using patronictl

1. Change parameters that do not need a restart

If you want to change a parameter (or more) for the whole cluster, you should use patronictl. If you want to change the initial configuration as well, you should also adjust patroni.yml.

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl edit-config PG1

All parameters already set are shown and can be changed like in any other file using vi commands:

loop_wait: 10
maximum_lag_on_failover: 1048576
postgresql:
  parameters:
    archive_command: /bin/true
    archive_mode: 'on'
    autovacuum_max_workers: '6'
    autovacuum_vacuum_scale_factor: '0.1'
    autovacuum_vacuum_threshold: '50'
    client_min_messages: WARNING
    effective_cache_size: 512MB
    hot_standby: 'on'
    hot_standby_feedback: 'on'
    listen_addresses: '*'
    log_autovacuum_min_duration: 60s
    log_checkpoints: 'on'
    log_connections: 'on'
    log_directory: pg_log
    log_disconnections: 'on'
    log_duration: 'on'
    log_filename: postgresql-%a.log
    log_line_prefix: '%m - %l - %p - %h - %u@%d - %x'
    log_lock_waits: 'on'
    log_min_duration_statement: 30s
    log_min_error_statement: NOTICE
    log_min_messages: WARNING
    log_rotation_age: '1440'
    log_statement: ddl
    log_temp_files: '0'
    log_timezone: Europe/Zurich
    log_truncate_on_rotation: 'on'
    logging_collector: 'on'
    maintenance_work_mem: 64MB
    max_replication_slots: 10
    max_wal_senders: '20'
    port: 5432
    shared_buffers: 128MB
    shared_preload_libraries: pg_stat_statements
    wal_compression: 'off'
    wal_keep_segments: 8
    wal_level: replica
    wal_log_hints: 'on'
    work_mem: 8MB
  use_pg_rewind: true
  use_slots: true
retry_timeout: 10
ttl: 30

Once saved, you get the following:

---
+++
@@ -2,7 +2,8 @@
 maximum_lag_on_failover: 1048576
 postgresql:
   parameters:
-    archive_command: /bin/true
+    archive_command: 'test ! -f /u99/pgdata/PG1/archived_wal/%f && cp %p /u99/pgdata/PG1/archived_wal/%f'
     archive_mode: 'on'
     autovacuum_max_workers: '6'
     autovacuum_vacuum_scale_factor: '0.1'

Apply these changes? [y/N]: y
Configuration changed

When connecting to the database you will see that the parameter has changed. It has also been changed on all the other nodes.

 postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] sq
psql (11.5)
Type "help" for help.

postgres=# show archive_command;
                                  archive_command
------------------------------------------------------------------------------------
 test ! -f /u99/pgdata/PG1/archived_wal/%f && cp %p /u99/pgdata/PG1/archived_wal/%f
(1 row)
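If you script such changes, patronictl can also apply them non-interactively. The options below exist in recent Patroni versions, but check "patronictl edit-config --help" for yours:

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl edit-config PG1 -p 'work_mem=16MB' --force
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl edit-config PG1 -s 'loop_wait=10' --force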
2. Change parameters that need a restart

How can parameters that need a restart be changed, especially when we want minimal downtime for the cluster?
First of all, the parameter can be changed the same way as parameters that do not need a restart, using patronictl edit-config. Once the parameter is changed, the status overview of the cluster gets a new column showing which nodes need a restart.

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB | Pending restart |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |        *        |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |        *        |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |        *        |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+

Afterwards there are two possibilities.

2.1 Restart node by node

If you do not want to restart the whole cluster, you have the possibility to restart each node separately. Keep in mind that you have to restart the Leader node first, otherwise the change does not take effect. It is also possible to schedule the restart of a node.

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl restart PG1 patroni1
When should the restart take place (e.g. 2019-10-08T15:33)  [now]:
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB | Pending restart |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |        *        |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |        *        |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |        *        |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
Are you sure you want to restart members patroni1? [y/N]: y
Restart if the PostgreSQL version is less than provided (e.g. 9.5.2)  []:
Success: restart on member patroni1
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl restart PG1 patroni2
When should the restart take place (e.g. 2019-10-08T15:34)  [now]:
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB | Pending restart |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |                 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |        *        |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |        *        |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
Are you sure you want to restart members patroni2? [y/N]: y
Restart if the PostgreSQL version is less than provided (e.g. 9.5.2)  []:
Success: restart on member patroni2
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl restart PG1 patroni3
When should the restart take place (e.g. 2019-10-08T15:34)  [now]:
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB | Pending restart |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |                 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |                 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |        *        |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
Are you sure you want to restart members patroni3? [y/N]: y
Restart if the PostgreSQL version is less than provided (e.g. 9.5.2)  []:
Success: restart on member patroni3
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
2.2 Restart the whole cluster

In case you don’t want to restart node by node and you can afford a downtime, it is also possible to restart the whole cluster (scheduled or immediately).

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl restart PG1
When should the restart take place (e.g. 2019-10-08T15:37)  [now]:
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB | Pending restart |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |        *        |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |        *        |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |        *        |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
Are you sure you want to restart members patroni1, patroni2, patroni3? [y/N]: y
Restart if the PostgreSQL version is less than provided (e.g. 9.5.2)  []:
Success: restart on member patroni1
Success: restart on member patroni2
Success: restart on member patroni3
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
Change PostgreSQL parameters using “alter system”

Of course you can also change a parameter on only one node using “alter system”.

 postgres@patroni1:/home/postgres/ [PG1] sq
psql (11.5)
Type "help" for help.

postgres=# show archive_Command;
 archive_command
-----------------
 /bin/false
(1 row)

postgres=# alter system set archive_command='/bin/true';
ALTER SYSTEM

postgres=# select pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)

postgres=# show archive_command;
 archive_command
-----------------
 /bin/true
(1 row)

Of course, the parameter change is not automatically applied to the replicas; the parameter is only changed on that node. All the other nodes will keep the value from the DCS. So you can change the parameter using “patronictl edit-config” or with an “alter system” command on each node. But you also have to keep in mind the order in which the parameters are applied: the “alter system” change will take precedence over the “patronictl edit-config” setting, because it ends up in postgresql.auto.conf, which is read last.
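If you find such a local override and want the node to follow the DCS configuration again, one way is to reset the parameter, which removes it from postgresql.auto.conf (a sketch, run on the affected node):

postgres=# alter system reset archive_command;
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)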

Conclusion

So while there are some specifics to consider when changing parameters in a Patroni cluster, it is quite easy to change a parameter. There are some parameters that need the same value on all nodes, e.g. max_connections, max_worker_processes and wal_level. And there are also some parameters controlled by Patroni, e.g. listen_addresses and port. For more details check the Patroni documentation. And last but not least: if you change the configuration with patronictl and one node still has a different configuration, look for a postgresql.auto.conf in the PGDATA directory. Maybe there you can find the reason for the differing parameters on your nodes.
If you are interested in more “Patroni Operations” blogs, check also this one: Patroni operations: Switchover and Failover.

The article Patroni Operations – Changing Parameters appeared first on Blog dbi services.

PostgreSQL 13 will come with partitioning support for pgbench

Yann Neuhaus - Wed, 2019-10-30 08:32

A lot of people use pgbench to benchmark a PostgreSQL instance, and pgbench is also heavily used by the PostgreSQL developers. While declarative partitioning was introduced in PostgreSQL 10, there was no support for it in pgbench, not even in the current version, PostgreSQL 12. With PostgreSQL 13, which is currently in development, this will change: pgbench will be able to create a partitioned pgbench_accounts table you can then run your benchmark against.

Having a look at the parameters of pgbench in PostgreSQL 13, two new ones pop up:

postgres@centos8pg:/home/postgres/ [pgdev] pgbench --help
pgbench is a benchmarking tool for PostgreSQL.

Usage:
  pgbench [OPTION]... [DBNAME]

Initialization options:
  -i, --initialize         invokes initialization mode
  -I, --init-steps=[dtgvpf]+ (default "dtgvp")
                           run selected initialization steps
  -F, --fillfactor=NUM     set fill factor
  -n, --no-vacuum          do not run VACUUM during initialization
  -q, --quiet              quiet logging (one message each 5 seconds)
  -s, --scale=NUM          scaling factor
  --foreign-keys           create foreign key constraints between tables
  --index-tablespace=TABLESPACE
                           create indexes in the specified tablespace
  --partitions=NUM         partition pgbench_accounts in NUM parts (default: 0)
  --partition-method=(range|hash)
                           partition pgbench_accounts with this method (default: range)
  --tablespace=TABLESPACE  create tables in the specified tablespace
  --unlogged-tables        create tables as unlogged tables

Options to select what to run:
  -b, --builtin=NAME[@W]   add builtin script NAME weighted at W (default: 1)
                           (use "-b list" to list available scripts)
  -f, --file=FILENAME[@W]  add script FILENAME weighted at W (default: 1)
  -N, --skip-some-updates  skip updates of pgbench_tellers and pgbench_branches
                           (same as "-b simple-update")
  -S, --select-only        perform SELECT-only transactions
                           (same as "-b select-only")

Benchmarking options:
  -c, --client=NUM         number of concurrent database clients (default: 1)
  -C, --connect            establish new connection for each transaction
  -D, --define=VARNAME=VALUE
                           define variable for use by custom script
  -j, --jobs=NUM           number of threads (default: 1)
  -l, --log                write transaction times to log file
  -L, --latency-limit=NUM  count transactions lasting more than NUM ms as late
  -M, --protocol=simple|extended|prepared
                           protocol for submitting queries (default: simple)
  -n, --no-vacuum          do not run VACUUM before tests
  -P, --progress=NUM       show thread progress report every NUM seconds
  -r, --report-latencies   report average latency per command
  -R, --rate=NUM           target rate in transactions per second
  -s, --scale=NUM          report this scale factor in output
  -t, --transactions=NUM   number of transactions each client runs (default: 10)
  -T, --time=NUM           duration of benchmark test in seconds
  -v, --vacuum-all         vacuum all four standard tables before tests
  --aggregate-interval=NUM aggregate data over NUM seconds
  --log-prefix=PREFIX      prefix for transaction time log file
                           (default: "pgbench_log")
  --progress-timestamp     use Unix epoch timestamps for progress
  --random-seed=SEED       set random seed ("time", "rand", integer)
  --sampling-rate=NUM      fraction of transactions to log (e.g., 0.01 for 1%)
  --show-script=NAME       show builtin script code, then exit

Common options:
  -d, --debug              print debugging output
  -h, --host=HOSTNAME      database server host or socket directory
  -p, --port=PORT          database server port number
  -U, --username=USERNAME  connect as specified database user
  -V, --version            output version information, then exit
  -?, --help               show this help, then exit

Report bugs to <pgsql-bugs@lists.postgresql.org>.

That should give us partitions according to the number of partitions and the partitioning method we chose, so let’s populate a new database:

postgres@centos8pg:/home/postgres/ [pgdev] psql -c "create database pgbench" postgres
CREATE DATABASE
Time: 326.715 ms
postgres@centos8pg:/home/postgres/ [pgdev] pgbench -i -s 10 --partitions=10 --partition-method=range --foreign-keys pgbench
dropping old tables...
NOTICE:  table "pgbench_accounts" does not exist, skipping
NOTICE:  table "pgbench_branches" does not exist, skipping
NOTICE:  table "pgbench_history" does not exist, skipping
NOTICE:  table "pgbench_tellers" does not exist, skipping
creating tables...
creating 10 partitions...
generating data...
100000 of 1000000 tuples (10%) done (elapsed 0.20 s, remaining 1.78 s)
200000 of 1000000 tuples (20%) done (elapsed 0.40 s, remaining 1.62 s)
300000 of 1000000 tuples (30%) done (elapsed 0.74 s, remaining 1.73 s)
400000 of 1000000 tuples (40%) done (elapsed 1.23 s, remaining 1.85 s)
500000 of 1000000 tuples (50%) done (elapsed 1.47 s, remaining 1.47 s)
600000 of 1000000 tuples (60%) done (elapsed 1.81 s, remaining 1.21 s)
700000 of 1000000 tuples (70%) done (elapsed 2.25 s, remaining 0.97 s)
800000 of 1000000 tuples (80%) done (elapsed 2.46 s, remaining 0.62 s)
900000 of 1000000 tuples (90%) done (elapsed 2.81 s, remaining 0.31 s)
1000000 of 1000000 tuples (100%) done (elapsed 3.16 s, remaining 0.00 s)
vacuuming...
creating primary keys...
creating foreign keys...
done in 5.78 s (drop tables 0.00 s, create tables 0.07 s, generate 3.29 s, vacuum 0.84 s, primary keys 0.94 s, foreign keys 0.65 s).

The pgbench_accounts table should now be partitioned by range:

postgres@centos8pg:/home/postgres/ [pgdev] psql -c "\d+ pgbench_accounts" pgbench
                                   Partitioned table "public.pgbench_accounts"
  Column  |     Type      | Collation | Nullable | Default | Storage  | Stats target | Description
----------+---------------+-----------+----------+---------+----------+--------------+-------------
 aid      | integer       |           | not null |         | plain    |              |
 bid      | integer       |           |          |         | plain    |              |
 abalance | integer       |           |          |         | plain    |              |
 filler   | character(84) |           |          |         | extended |              |
Partition key: RANGE (aid)
Indexes:
    "pgbench_accounts_pkey" PRIMARY KEY, btree (aid)
Foreign-key constraints:
    "pgbench_accounts_bid_fkey" FOREIGN KEY (bid) REFERENCES pgbench_branches(bid)
Referenced by:
    TABLE "pgbench_history" CONSTRAINT "pgbench_history_aid_fkey" FOREIGN KEY (aid) REFERENCES pgbench_accounts(aid)
Partitions: pgbench_accounts_1 FOR VALUES FROM (MINVALUE) TO (100001),
            pgbench_accounts_10 FOR VALUES FROM (900001) TO (MAXVALUE),
            pgbench_accounts_2 FOR VALUES FROM (100001) TO (200001),
            pgbench_accounts_3 FOR VALUES FROM (200001) TO (300001),
            pgbench_accounts_4 FOR VALUES FROM (300001) TO (400001),
            pgbench_accounts_5 FOR VALUES FROM (400001) TO (500001),
            pgbench_accounts_6 FOR VALUES FROM (500001) TO (600001),
            pgbench_accounts_7 FOR VALUES FROM (600001) TO (700001),
            pgbench_accounts_8 FOR VALUES FROM (700001) TO (800001),
            pgbench_accounts_9 FOR VALUES FROM (800001) TO (900001)

The same should work for hash partitioning:

postgres@centos8pg:/home/postgres/ [pgdev] pgbench -i -s 10 --partitions=10 --partition-method=hash --foreign-keys pgbench
dropping old tables...
creating tables...
creating 10 partitions...
generating data...
100000 of 1000000 tuples (10%) done (elapsed 0.19 s, remaining 1.69 s)
200000 of 1000000 tuples (20%) done (elapsed 0.43 s, remaining 1.71 s)
300000 of 1000000 tuples (30%) done (elapsed 0.67 s, remaining 1.55 s)
400000 of 1000000 tuples (40%) done (elapsed 1.03 s, remaining 1.54 s)
500000 of 1000000 tuples (50%) done (elapsed 1.22 s, remaining 1.22 s)
600000 of 1000000 tuples (60%) done (elapsed 1.59 s, remaining 1.06 s)
700000 of 1000000 tuples (70%) done (elapsed 1.80 s, remaining 0.77 s)
800000 of 1000000 tuples (80%) done (elapsed 2.16 s, remaining 0.54 s)
900000 of 1000000 tuples (90%) done (elapsed 2.36 s, remaining 0.26 s)
1000000 of 1000000 tuples (100%) done (elapsed 2.69 s, remaining 0.00 s)
vacuuming...
creating primary keys...
creating foreign keys...
done in 4.99 s (drop tables 0.10 s, create tables 0.08 s, generate 2.74 s, vacuum 0.84 s, primary keys 0.94 s, foreign keys 0.30 s).
postgres@centos8pg:/home/postgres/ [pgdev] psql -c "\d+ pgbench_accounts" pgbench
                                   Partitioned table "public.pgbench_accounts"
  Column  |     Type      | Collation | Nullable | Default | Storage  | Stats target | Description
----------+---------------+-----------+----------+---------+----------+--------------+-------------
 aid      | integer       |           | not null |         | plain    |              |
 bid      | integer       |           |          |         | plain    |              |
 abalance | integer       |           |          |         | plain    |              |
 filler   | character(84) |           |          |         | extended |              |
Partition key: HASH (aid)
Indexes:
    "pgbench_accounts_pkey" PRIMARY KEY, btree (aid)
Foreign-key constraints:
    "pgbench_accounts_bid_fkey" FOREIGN KEY (bid) REFERENCES pgbench_branches(bid)
Referenced by:
    TABLE "pgbench_history" CONSTRAINT "pgbench_history_aid_fkey" FOREIGN KEY (aid) REFERENCES pgbench_accounts(aid)
Partitions: pgbench_accounts_1 FOR VALUES WITH (modulus 10, remainder 0),
            pgbench_accounts_10 FOR VALUES WITH (modulus 10, remainder 9),
            pgbench_accounts_2 FOR VALUES WITH (modulus 10, remainder 1),
            pgbench_accounts_3 FOR VALUES WITH (modulus 10, remainder 2),
            pgbench_accounts_4 FOR VALUES WITH (modulus 10, remainder 3),
            pgbench_accounts_5 FOR VALUES WITH (modulus 10, remainder 4),
            pgbench_accounts_6 FOR VALUES WITH (modulus 10, remainder 5),
            pgbench_accounts_7 FOR VALUES WITH (modulus 10, remainder 6),
            pgbench_accounts_8 FOR VALUES WITH (modulus 10, remainder 7),
            pgbench_accounts_9 FOR VALUES WITH (modulus 10, remainder 8)

Looks fine. Now you can easily benchmark against a partitioned pgbench_accounts table.
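From here, running a benchmark against the partitioned tables works exactly as before; for example (client, thread and duration values chosen arbitrarily):

postgres@centos8pg:/home/postgres/ [pgdev] pgbench -c 10 -j 2 -T 60 pgbench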

The article PostgreSQL 13 will come with partitioning support for pgbench appeared first on Blog dbi services.
