zabetak commented on code in PR #6053:
URL: https://github.com/apache/hive/pull/6053#discussion_r2315150897


##########
common/src/java/org/apache/hive/http/JMXJsonServlet.java:
##########
@@ -241,40 +241,34 @@ private void listBeans(JsonGenerator jg, ObjectName qry, String attribute,
         } catch (AttributeNotFoundException e) {
           // If the modelerType attribute was not found, the class name is used
           // instead.
-          LOG.error("getting attribute " + prs + " of " + oname
-              + " threw an exception", e);
+          LOG.error("getting attribute {} of {} threw an exception", prs, 
oname, e);

Review Comment:
   I am wondering if the exception serialization remains the same after this change since we no longer use the API that accepts a `Throwable`.
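   For reference, SLF4J treats a trailing `Throwable` that has no matching placeholder as the exception to log, so the stack trace should still be serialized. A minimal sketch to check both behaviors (class name and argument values are hypothetical; assumes any SLF4J binding on the classpath):
   ```java
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;

   public class Slf4jThrowableCheck {
     private static final Logger LOG = LoggerFactory.getLogger(Slf4jThrowableCheck.class);

     public static void main(String[] args) {
       Exception e = new IllegalStateException("boom");
       // Two placeholders, three arguments: the trailing Throwable has no
       // placeholder, so it is logged as an exception with its stack trace.
       LOG.error("getting attribute {} of {} threw an exception", "attr", "bean", e);
       // Three placeholders, three arguments: the Throwable is consumed by the
       // last placeholder; only its toString() appears and no stack trace.
       LOG.error("getting attribute {} of {} threw {}", "attr", "bean", e);
     }
   }
   ```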



##########
kudu-handler/src/java/org/apache/hadoop/hive/kudu/KuduHiveUtils.java:
##########
@@ -104,11 +104,10 @@ public static void importCredentialsFromCurrentSubject(KuduClient client) {
         // 'client'. This is necessary if we want to support a job which
         // reads from one cluster and writes to another.
         if (!tok.getService().equals(service)) {
-          LOG.debug("Not importing credentials for service " + service +
-              "(expecting service " + service + ")");
+          LOG.debug("Not importing credentials for service {} (expecting 
service {})", service, service);

Review Comment:
   Passing the same object twice does not make much sense, but I guess that is outside the scope of this PR.
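   Based on the surrounding `!tok.getService().equals(service)` check, the intent was presumably to log the token's service first. A hypothetical fix (not part of this PR):
   ```java
   // Hypothetical: log the token's service rather than `service` twice.
   LOG.debug("Not importing credentials for service {} (expecting service {})",
       tok.getService(), service);
   ```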



##########
iceberg/iceberg-catalog/src/main/java/org/apache/iceberg/hive/HiveOperationsBase.java:
##########
@@ -85,7 +85,7 @@ default Table loadHmsTable() throws TException, InterruptedException {
     try {
       return metaClients().run(client -> client.getTable(database(), table()));
     } catch (NoSuchObjectException nte) {
-      LOG.trace("Table not found {}", database() + "." + table(), nte);
+      LOG.trace("Table not found {}.{}", database(), table(), nte);

Review Comment:
   Two placeholders but three arguments.



##########
llap-client/src/java/org/apache/hadoop/hive/llap/registry/impl/LlapFixedRegistryImpl.java:
##########
@@ -144,12 +144,11 @@ public FixedServiceInstance(String host) {
           if (NetUtils.isLocalAddress(inetAddress)) {
             InetSocketAddress socketAddress = new InetSocketAddress(0);
             socketAddress = NetUtils.getConnectAddress(socketAddress);
-            LOG.info("Adding host identified as local: " + host + " as "
-                + socketAddress.getHostName());
+            LOG.info("Adding host identified as local: {} as {}", host, 
socketAddress.getHostName());
             host = socketAddress.getHostName();
           }
         } catch (UnknownHostException e) {
-          LOG.warn("Ignoring resolution issues for host: " + host, e);
+          LOG.warn("Ignoring resolution issues for host: {}", host, e);

Review Comment:
   One placeholder, two arguments.



##########
llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/SerDeEncodedDataReader.java:
##########
@@ -503,8 +503,7 @@ public void finalizeStripe(
       List<ColumnEncoding> allEnc = footer.getColumnsList();
       OrcProto.StripeInformation si = dirEntry.build();
       if (LlapIoImpl.LOG.isTraceEnabled()) {
-        LlapIoImpl.LOG.trace(("Finalizing stripe " + footer.build() + " => " + si)
-            .replace('\n', ' '));
+        LlapIoImpl.LOG.trace("Finalizing stripe {} => {}", footer.build(), si);

Review Comment:
   The `replace` logic was removed. Is it intentional?
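   If the single-line output still matters (the protobuf `toString()` is multi-line), one way to keep the flattening with placeholders could be the following sketch, based on the types visible in the diff:
   ```java
   // Sketch: flatten the multi-line protobuf toString() before logging,
   // preserving the old single-line trace output.
   LlapIoImpl.LOG.trace("Finalizing stripe {} => {}",
       footer.build().toString().replace('\n', ' '),
       si.toString().replace('\n', ' '));
   ```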



##########
metastore/src/java/org/apache/hadoop/hive/metastore/HiveProtoEventsCleanerTask.java:
##########
@@ -127,14 +127,14 @@ private void cleanupDir(Path eventsBasePath) {
       for (FileStatus dir : statuses) {
         try {
           deleteDirByOwner(fs, dir);
-          LOG.info("Deleted expired proto events dir: " + dir.getPath());
+          LOG.info("Deleted expired proto events dir: {}", dir.getPath());
         } catch (IOException ioe) {
           // Log error and continue to delete other expired dirs.
-          LOG.error("Error deleting expired proto events dir " + 
dir.getPath(), ioe);
+          LOG.error("Error deleting expired proto events dir {}", 
dir.getPath(), ioe);

Review Comment:
   One placeholder, two args.



##########
metastore/src/java/org/apache/hadoop/hive/metastore/HiveProtoEventsCleanerTask.java:
##########
@@ -127,14 +127,14 @@ private void cleanupDir(Path eventsBasePath) {
       for (FileStatus dir : statuses) {
         try {
           deleteDirByOwner(fs, dir);
-          LOG.info("Deleted expired proto events dir: " + dir.getPath());
+          LOG.info("Deleted expired proto events dir: {}", dir.getPath());
         } catch (IOException ioe) {
           // Log error and continue to delete other expired dirs.
-          LOG.error("Error deleting expired proto events dir " + 
dir.getPath(), ioe);
+          LOG.error("Error deleting expired proto events dir {}", 
dir.getPath(), ioe);
         }
       }
     } catch (IOException e) {
-      LOG.error("Error while trying to delete expired proto events from " + 
eventsBasePath, e);
+      LOG.error("Error while trying to delete expired proto events from {}", 
eventsBasePath, e);

Review Comment:
   One placeholder, two args.



##########
serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyPrimitive.java:
##########
@@ -78,12 +78,10 @@ public void logExceptionMessage(ByteArrayRef bytes, int start, int length, Strin
     try {
       if(LOG.isDebugEnabled()) {
         String byteData = Text.decode(bytes.getData(), start, length);
-        LOG.debug("Data not in the " + dataType
-            + " data type range so converted to null. Given data is :" +
-                    byteData, new Exception("For debugging purposes"));
+        LOG.debug("Data not in the {} data type range so converted to null. 
Given data is : {}", dataType, byteData);
       }
     } catch (CharacterCodingException e1) {
-      LOG.debug("Data not in the " + dataType + " data type range so converted 
to null.", e1);
+      LOG.debug("Data not in the {} data type range so converted to null.", 
dataType, e1);

Review Comment:
   One placeholder, two args.



##########
serde/src/java/org/apache/hadoop/hive/serde2/lazy/fast/LazySimpleDeserializeRead.java:
##########
@@ -1261,12 +1261,10 @@ public void logExceptionMessage(byte[] bytes, int bytesStart, int bytesLength, S
     try {
       if (LOG.isDebugEnabled()) {
         String byteData = Text.decode(bytes, bytesStart, bytesLength);
-        LOG.debug("Data not in the " + dataType
-            + " data type range so converted to null. Given data is :" +
-                    byteData, new Exception("For debugging purposes"));
+        LOG.debug("Data not in the {} data type range so converted to null. 
Given data is : {}", dataType, byteData);

Review Comment:
   Was the exception stack trace removed intentionally?
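   If the synthetic stack trace is still useful, it could be kept as a trailing argument, since SLF4J logs a final `Throwable` without a matching placeholder as an exception. A sketch:
   ```java
   // Sketch: keep the "For debugging purposes" stack trace as the trailing Throwable.
   LOG.debug("Data not in the {} data type range so converted to null. Given data is : {}",
       dataType, byteData, new Exception("For debugging purposes"));
   ```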



##########
service/src/java/org/apache/hive/service/CompositeService.java:
##########
@@ -141,7 +141,7 @@ public void run() {
         // Stop the Composite Service
         compositeService.stop();
       } catch (Throwable t) {
-        LOG.info("Error stopping " + compositeService.getName(), t);
+        LOG.info("Error stopping {}", compositeService.getName(), t);

Review Comment:
   placeholders != args



##########
service/src/java/org/apache/hive/service/cli/CLIService.java:
##########
@@ -488,22 +487,22 @@ public OperationStatus getOperationStatus(OperationHandle opHandle, boolean getP
         operation.getBackgroundHandle().get(timeout, TimeUnit.MILLISECONDS);
       } catch (TimeoutException e) {
         // No Op, return to the caller since long polling timeout has expired
-        LOG.trace(opHandle + ": Long polling timed out");
+        LOG.trace("{}: Long polling timed out", opHandle);
       } catch (CancellationException e) {
         // The background operation thread was cancelled
-        LOG.trace(opHandle + ": The background operation was cancelled", e);
+        LOG.trace("{}: The background operation was cancelled", opHandle, e);

Review Comment:
   placeholders != args



##########
service/src/java/org/apache/hive/service/auth/ldap/LdapSearch.java:
##########
@@ -198,8 +198,7 @@ private SearchResultHandler execute(Collection<String> baseDns, Query query) {
           searchResults.add(searchResult);
         }
       } catch (NamingException ex) {
-        LOG.debug("Exception happened for query '" + query.getFilter() +
-            "' with base DN '" + aBaseDn + "'", ex);
+        LOG.debug("Exception happened for query '{}' with base DN '{}'", 
query.getFilter(), aBaseDn, ex);

Review Comment:
   placeholders != args



##########
service/src/java/org/apache/hive/service/cli/CLIService.java:
##########
@@ -488,22 +487,22 @@ public OperationStatus getOperationStatus(OperationHandle opHandle, boolean getP
         operation.getBackgroundHandle().get(timeout, TimeUnit.MILLISECONDS);
       } catch (TimeoutException e) {
         // No Op, return to the caller since long polling timeout has expired
-        LOG.trace(opHandle + ": Long polling timed out");
+        LOG.trace("{}: Long polling timed out", opHandle);
       } catch (CancellationException e) {
         // The background operation thread was cancelled
-        LOG.trace(opHandle + ": The background operation was cancelled", e);
+        LOG.trace("{}: The background operation was cancelled", opHandle, e);
       } catch (ExecutionException e) {
         // Note: Hive ops do not use the normal Future failure path, so this will not happen
         //       in case of actual failure; the Future will just be done.
         // The background operation thread was aborted
-        LOG.warn(opHandle + ": The background operation was aborted", e);
+        LOG.warn("{}: The background operation was aborted", opHandle, e);

Review Comment:
   placeholders != args



##########
service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java:
##########
@@ -230,7 +230,7 @@ protected int processCmd(String cmd) {
         try {
           closeOperation(opHandle);
         } catch (HiveSQLException e) {
-          LOG.warn("Failed to close operation for command in .hiverc file.", 
e);
+        LOG.warn("Failed to close operation for command in .hiverc file.", e);

Review Comment:
   Wrong indent



##########
service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java:
##########
@@ -262,13 +262,13 @@ private void processGlobalInitFile() {
           hivercFile = new File(hivercFile, SessionManager.HIVERCFILE);
         }
         if (hivercFile.isFile()) {
-          LOG.info("Running global init file: " + hivercFile);
+        LOG.info("Running global init file: {}", hivercFile);
           int rc = processor.processFile(hivercFile.getAbsolutePath());
           if (rc != 0) {
-            LOG.error("Failed on initializing global .hiverc file");
+          LOG.error("Failed on initializing global .hiverc file");

Review Comment:
   wrong indent



##########
service/src/java/org/apache/hive/service/cli/session/SessionManager.java:
##########
@@ -640,9 +638,9 @@ private void closeSessionInternal(HiveSession session) throws HiveSQLException {
         // Asynchronously shutdown this instance of HiveServer2,
         // if there are no active client sessions
         if (getOpenSessionCount() == 0) {
-          LOG.info("This instance of HiveServer2 has been removed from the 
list of server "
-              + "instances available for dynamic service discovery. "
-              + "The last client session has ended - will shutdown now.");
+        LOG.info("This instance of HiveServer2 has been removed from the list 
of server "

Review Comment:
   wrong indent



##########
shims/common/src/main/java/org/apache/hadoop/hive/io/HdfsUtils.java:
##########
@@ -203,9 +202,9 @@ private HadoopFileStatus(Configuration conf, FileSystem fs, FileStatus fileStatu
       try {
         aclStatus = fs.getAclStatus(file);
       } catch (Exception e) {
-        LOG.info("Skipping ACL inheritance: File system for path " + file + " 
" +
-                "does not support ACLs but dfs.namenode.acls.enabled is set to 
true. ");
-        LOG.debug("The details are: " + e, e);
+        LOG.info("Skipping ACL inheritance: File system for path {} does not 
support ACLs " +
+                "but dfs.namenode.acls.enabled is set to true.", file);
+        LOG.debug("The details are: {}", e);

Review Comment:
   ```suggestion
           LOG.debug("The details are:", e);
   ```
   With the `{}` placeholder the exception is rendered via `toString()` and its stack trace is lost; passing it as the trailing argument without a placeholder keeps the stack trace.



##########
common/src/java/org/apache/hive/http/JMXJsonServlet.java:
##########
@@ -241,40 +241,34 @@ private void listBeans(JsonGenerator jg, ObjectName qry, String attribute,
         } catch (AttributeNotFoundException e) {
           // If the modelerType attribute was not found, the class name is used
           // instead.
-          LOG.error("getting attribute " + prs + " of " + oname
-              + " threw an exception", e);
+          LOG.error("getting attribute {} of {} threw an exception", prs, 
oname, e);

Review Comment:
   In addition, it is strange that we use **two** placeholders `{}` but pass **three** arguments.



##########
llap-client/src/java/org/apache/hadoop/hive/llap/LlapBaseRecordReader.java:
##########
@@ -77,7 +77,7 @@ public synchronized void close() throws IOException {
       try {
         din.close();
       } catch (Exception err) {
-        LOG.error("Error closing input stream:" + err.getMessage(), err);
+        LOG.error("Error closing input stream: {}", err.getMessage(), err);

Review Comment:
   How about something simpler:
   ```java
   LOG.error("Error closing input stream:", err);
   ```
   The `err.getMessage()` argument is redundant anyway, since the message is printed as part of the logged throwable.
   



##########
llap-common/src/java/org/apache/hadoop/hive/llap/LlapUtil.java:
##########
@@ -184,7 +184,7 @@ public static RPC.Server startProtocolServer(int srvPort, int numHandlers,
           numHandlers, impl, secretManager, provider, aclVars);
       server.start();
     } catch (IOException e) {
-      LOG.error("Failed to run RPC Server on port: " + srvPort, e);
+      LOG.error("Failed to run RPC Server on port: {}", srvPort, e);

Review Comment:
   One placeholder, two args.



##########
llap-ext-client/src/java/org/apache/hadoop/hive/llap/LlapBaseInputFormat.java:
##########
@@ -339,7 +339,7 @@ private static void closeConnections(String handleId, List<Connection> handleCon
         try {
           conn.close();
         } catch (Exception err) {
-          LOG.error("Error while closing connection for " + handleId, err);
+          LOG.error("Error while closing connection for {}", handleId, err);

Review Comment:
   One placeholder, two args.



##########
llap-tez/src/java/org/apache/hadoop/hive/llap/tezplugins/metrics/BlacklistingLlapMetricsListener.java:
##########
@@ -171,10 +167,10 @@ public void newClusterMetrics(Map<String, LlapMetricsCollector.LlapMetrics> newM
       if (emptyExecutorsWithoutSlowest > maxAverageTimeMaxExecutors * this.emptyExecutorsThreshold) {
         // Seems like a good candidate, let's try to blacklist
         try {
-          LOG.debug("Trying to blacklist node: " + maxAverageTimeIdentity);
+          LOG.debug("Trying to blacklist node: {}", maxAverageTimeIdentity);
           setCapacity(maxAverageTimeIdentity, 0, 0);
         } catch (Throwable t) {
-          LOG.debug("Can not blacklist node: " + maxAverageTimeIdentity, t);
+          LOG.debug("Can not blacklist node: {}", maxAverageTimeIdentity, t);

Review Comment:
   One placeholder, two args.



##########
llap-client/src/java/org/apache/hadoop/hive/registry/impl/ZkRegistryBase.java:
##########
@@ -705,7 +703,7 @@ private int extractSeqNum(String nodeName) {
     try {
       return Integer.parseInt(ephSeqVersionStr);
     } catch (NumberFormatException e) {
-      LOG.error("Cannot parse " + ephSeqVersionStr + " from " + nodeName, e);
+      LOG.error("Cannot parse {} from {}", ephSeqVersionStr, nodeName, e);

Review Comment:
   Two placeholders, three args.



##########
metastore/src/java/org/apache/hadoop/hive/metastore/HiveProtoEventsCleanerTask.java:
##########
@@ -161,7 +161,7 @@ public Object run() throws Exception {
           }
         });
       } catch (InterruptedException ie) {
-        LOG.error("Could not delete " + eventsDir.getPath() + " for UGI: " + 
ugi, ie);
+        LOG.error("Could not delete {} for UGI: {}", eventsDir.getPath(), ugi, 
ie);

Review Comment:
   Two placeholders, three args.



##########
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreUtils.java:
##########
@@ -97,7 +97,7 @@ public static Deserializer getDeserializer(Configuration conf, org.apache.hadoop
       deserializer.initialize(conf, properties, null);
     } catch (SerDeException e) {
       if (!skipConfError) {
-        LOG.error("error in initSerDe: " + e.getClass().getName() + " " + 
e.getMessage(), e);
+        LOG.error("error in initSerDe: {} {}", e.getClass().getName(), 
e.getMessage(), e);

Review Comment:
   Two placeholders, three args.



##########
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreUtils.java:
##########
@@ -141,8 +141,7 @@ static public Deserializer getDeserializer(Configuration conf,
     } catch (RuntimeException e) {
       throw e;
     } catch (Throwable e) {
-      LOG.error("error in initSerDe: " + e.getClass().getName() + " "
-          + e.getMessage(), e);
+      LOG.error("error in initSerDe: {} {}", e.getClass().getName(), 
e.getMessage(), e);

Review Comment:
   Two placeholders, three args.



##########
service/src/java/org/apache/hive/service/CompositeService.java:
##########
@@ -105,7 +105,7 @@ public synchronized void decommission() {
       try {
         service.decommission();
       } catch (Throwable t) {
-        LOG.info("Error decommissioning " + service.getName(), t);
+        LOG.info("Error decommissioning {}", service.getName(), t);

Review Comment:
   One placeholder, two args.



##########
service/src/java/org/apache/hive/service/CompositeService.java:
##########
@@ -118,7 +118,7 @@ private synchronized void stop(int numOfServicesStarted) {
       try {
         service.stop();
       } catch (Throwable t) {
-        LOG.info("Error stopping " + service.getName(), t);
+        LOG.info("Error stopping {}", service.getName(), t);

Review Comment:
   placeholders != args



##########
serde/src/java/org/apache/hadoop/hive/serde2/lazy/fast/LazySimpleDeserializeRead.java:
##########
@@ -1261,12 +1261,10 @@ public void logExceptionMessage(byte[] bytes, int bytesStart, int bytesLength, S
     try {
       if (LOG.isDebugEnabled()) {
         String byteData = Text.decode(bytes, bytesStart, bytesLength);
-        LOG.debug("Data not in the " + dataType
-            + " data type range so converted to null. Given data is :" +
-                    byteData, new Exception("For debugging purposes"));
+        LOG.debug("Data not in the {} data type range so converted to null. 
Given data is : {}", dataType, byteData);
       }
     } catch (CharacterCodingException e1) {
-      LOG.debug("Data not in the " + dataType + " data type range so converted 
to null.", e1);
+      LOG.debug("Data not in the {} data type range so converted to null.", 
dataType, e1);

Review Comment:
   One placeholder, two args.



##########
service/src/java/org/apache/hive/service/ServiceUtils.java:
##########
@@ -61,7 +61,7 @@ public static void cleanup(Logger log, java.io.Closeable... closeables) {
           c.close();
         } catch(IOException e) {
           if (log != null && log.isDebugEnabled()) {
-            log.debug("Exception in closing " + c, e);
+            log.debug("Exception in closing {}", c, e);

Review Comment:
   placeholders != args



##########
service/src/java/org/apache/hive/service/auth/ldap/GroupFilterFactory.java:
##########
@@ -122,7 +122,7 @@ public void apply(DirSearch ldap, String user) throws AuthenticationException {
           groupDns.add(groupDn);
         } catch (NamingException e) {
           LOG.warn("Cannot find DN for group", e);
-          LOG.debug("Cannot find DN for group " + groupId, e);
+          LOG.debug("Cannot find DN for group {}", groupId, e);

Review Comment:
   placeholders != args



##########
service/src/java/org/apache/hive/service/CompositeService.java:
##########
@@ -71,7 +71,7 @@ public synchronized void start() {
       }
       super.start();
     } catch (Throwable e) {
-      LOG.error("Error starting services " + getName(), e);
+      LOG.error("Error starting services {}", getName(), e);

Review Comment:
   One placeholder, two args.



##########
service/src/java/org/apache/hive/service/ServiceOperations.java:
##########
@@ -130,9 +130,7 @@ public static Exception stopQuietly(Service service) {
     try {
       stop(service);
     } catch (Exception e) {
-      LOG.warn("When stopping the service " + service.getName()
-                   + " : " + e,
-               e);
+      LOG.warn("When stopping the service {} : {}", service.getName(), e, e);

Review Comment:
   placeholders != args
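   Since `e` is already the trailing throwable, formatting it into the message as well just duplicates it. One simpler option might be (a sketch):
   ```java
   // Sketch: let SLF4J append the exception instead of also formatting it into the message.
   LOG.warn("When stopping the service {}", service.getName(), e);
   ```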



##########
service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java:
##########
@@ -262,13 +262,13 @@ private void processGlobalInitFile() {
           hivercFile = new File(hivercFile, SessionManager.HIVERCFILE);
         }
         if (hivercFile.isFile()) {
-          LOG.info("Running global init file: " + hivercFile);
+        LOG.info("Running global init file: {}", hivercFile);

Review Comment:
   wrong indent



##########
service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java:
##########
@@ -262,13 +262,13 @@ private void processGlobalInitFile() {
           hivercFile = new File(hivercFile, SessionManager.HIVERCFILE);
         }
         if (hivercFile.isFile()) {
-          LOG.info("Running global init file: " + hivercFile);
+        LOG.info("Running global init file: {}", hivercFile);
           int rc = processor.processFile(hivercFile.getAbsolutePath());
           if (rc != 0) {
-            LOG.error("Failed on initializing global .hiverc file");
+          LOG.error("Failed on initializing global .hiverc file");
           }
         } else {
-          LOG.debug("Global init file " + hivercFile + " does not exist");
+        LOG.debug("Global init file {} does not exist", hivercFile);

Review Comment:
   wrong indent



##########
service/src/java/org/apache/hive/service/cli/session/SessionManager.java:
##########
@@ -318,12 +318,12 @@ public void run() {
            if (sessionTimeout > 0 && session.getLastAccessTime() + sessionTimeout <= current
                && (!checkOperation || session.getNoOperationTime() > sessionTimeout)) {
               SessionHandle handle = session.getSessionHandle();
-              LOG.warn("Session " + handle + " is Timed-out (last access : " +
-                  new Date(session.getLastAccessTime()) + ") and will be closed");
+              LOG.warn("Session {} is Timed-out (last access : {}) and will be closed",
+                      handle, new Date(session.getLastAccessTime()));
               try {
                 closeSession(handle);
               } catch (HiveSQLException e) {
-                LOG.warn("Exception is thrown closing session " + handle, e);
+                LOG.warn("Exception is thrown closing session {}", handle, e);

Review Comment:
   placeholders != args



##########
service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java:
##########
@@ -869,7 +868,8 @@ private void closeTimedOutOperations(List<Operation> operations) {
         try {
           operation.close();
         } catch (Exception e) {
-          LOG.warn("Exception is thrown closing timed-out operation, reported 
open_operations metrics may be incorrect " + operation.getHandle(), e);
+          LOG.warn("Exception is thrown closing timed-out operation, " +
+                  "reported open_operations metrics may be incorrect {}", 
operation.getHandle(), e);

Review Comment:
   placeholders != args



##########
service/src/java/org/apache/hive/service/cli/session/SessionManager.java:
##########
@@ -391,8 +391,7 @@ private void cleanupLoggingRootDir() {
       try {
         FileUtils.forceDelete(operationLogRootDir);
       } catch (Exception e) {
-        LOG.warn("Failed to cleanup root dir of HS2 logging: " + 
operationLogRootDir
-            .getAbsolutePath(), e);
+        LOG.warn("Failed to cleanup root dir of HS2 logging: {}", 
operationLogRootDir.getAbsolutePath(), e);

Review Comment:
   placeholders != args



##########
service/src/java/org/apache/hive/service/cli/session/SessionManager.java:
##########
@@ -264,25 +264,25 @@ private void initOperationLogRootDir() {
     isOperationLogEnabled = true;
 
     if (operationLogRootDir.exists() && !operationLogRootDir.isDirectory()) {
-      LOG.warn("The operation log root directory exists, but it is not a 
directory: " +
+      LOG.warn("The operation log root directory exists, but it is not a 
directory: {}",
           operationLogRootDir.getAbsolutePath());
       isOperationLogEnabled = false;
     }
 
     if (!operationLogRootDir.exists()) {
       if (!operationLogRootDir.mkdirs()) {
-        LOG.warn("Unable to create operation log root directory: " +
+        LOG.warn("Unable to create operation log root directory: {}",
             operationLogRootDir.getAbsolutePath());
         isOperationLogEnabled = false;
       }
     }
 
     if (isOperationLogEnabled) {
-      LOG.info("Operation log root directory is created: " + 
operationLogRootDir.getAbsolutePath());
+      LOG.info("Operation log root directory is created: {}", 
operationLogRootDir.getAbsolutePath());
       try {
         FileUtils.forceDeleteOnExit(operationLogRootDir);
       } catch (IOException e) {
-        LOG.warn("Failed to schedule cleanup HS2 operation logging root dir: " 
+
+        LOG.warn("Failed to schedule cleanup HS2 operation logging root dir: 
{} " +

Review Comment:
   placeholders != args



##########
shims/common/src/main/java/org/apache/hadoop/hive/io/HdfsUtils.java:
##########
@@ -146,9 +146,8 @@ static void setFullFileStatus(Configuration conf, HdfsUtils.HadoopFileStatus sou
         }
       }
     } catch (Exception e) {
-      LOG.warn(
-              "Unable to inherit permissions for file " + target + " from file 
" + sourceStatus.getFileStatus().getPath(),
-              e.getMessage());
+      LOG.warn("Unable to inherit permissions for file {} from file {}",

Review Comment:
   placeholders != args



##########
shims/common/src/main/java/org/apache/hadoop/hive/io/HdfsUtils.java:
##########
@@ -121,9 +121,9 @@ static void setFullFileStatus(Configuration conf, HdfsUtils.HadoopFileStatus sou
              run(fsShell, new String[]{"-setfacl", "-R", "--set", aclEntry, target.toString()});
 
             } catch (Exception e) {
-              LOG.info("Skipping ACL inheritance: File system for path " + 
target + " " +
-                  "does not support ACLs but dfs.namenode.acls.enabled is set 
to true. ");
-              LOG.debug("The details are: " + e, e);
+              LOG.info("Skipping ACL inheritance: File system for path {} does 
not support ACLs " +
+                      "but dfs.namenode.acls.enabled is set to true.", target);
+              LOG.debug("The details are: {}", e);

Review Comment:
   ```suggestion
                 LOG.debug("The details are:", e);
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
