11/26/2015

Updating FreeBSD's ports: a fully automated script to (slowly) rebuild everything

FreeBSD is a really good operating system to work with, but keeping the ports tree up to date can easily turn into a painful daily chore.

Whenever a developer moves an important file between ports (it happens a lot with GNOME 3), a simple 'portmaster -a' will fail, complaining that the old and the new package want to write to the same place.

You can always fix the issue by removing the whole package and reinstalling it, but that interrupts the already very long (hours) update procedure, and you'll have to wait another couple of hours for the update to finish... until you run into another package move and have to start over again.

After working with FreeBSD for a while, the most reliable update procedure turns out to be the one proposed in 'man portmaster':

     Using portmaster to do a complete reinstallation of all your ports:
           1. portmaster --list-origins > ~/installed-port-list
           2. Update your ports tree
           3. portmaster -ty --clean-distfiles
           4. portmaster --check-port-dbdir
           5. portmaster -Faf
           6. pkg_delete -a
           7. rm -rf /usr/local/lib/compat/pkg
           8. Back up any files in /usr/local you wish to save,
              such as configuration files in /usr/local/etc
           9. Manually check /usr/local and /var/db/pkg
              to make sure that they are really empty
           10. Re-install portmaster
           11. portmaster `cat ~/installed-port-list`

     You probably want to use the -D option for the installation and then run
     --clean-distfiles [-y] again when you are done.  You might also want to
     consider using the --force-config option when installing the new ports.

     Alternatively you could use portmaster -a -f -D to do an ``in place''
     update of your ports.  If that process is interrupted for any reason you
     can use portmaster -a -f -D -R to avoid rebuilding ports already rebuilt
     on previous runs.  However the first method (delete everything and rein-
     stall) is preferred.

Well, this painful process (with lots of packages on not-quite-state-of-the-art hardware it can take up to 24 hours, sometimes days) really works. Whatever entanglement I've had with GNOME packages, reinstalling everything has always sorted it out.

Some time ago I bought six used Dell desktops to set up a home-made cluster, and today I had to update their packages, because I needed to install a new port on them.

The automatic update via 'portmaster -a' failed. As I really didn't want to go through the painful troubleshooting on each machine, I decided to follow the suggestion from portmaster's documentation instead, i.e. reinstall everything on those rigs.

Which, unfortunately, turns out not to be completely automatic, and rather uncomfortable: portmaster and some other tools stop and ask for the user's confirmation on the terminal, which again interrupts a very long procedure.

Portmaster's default switches may have changed since that man page was written, but as of FreeBSD 10.1 and 10.2 the following script runs through without interruption:

portmaster --list-origins > ~/installed-port-list
portsnap fetch update
# backup /var/db/ports with our existing options
cd /var/db
tar cvzf ports.tar.gz ports
portmaster -ty --clean-distfiles
portmaster --check-port-dbdir -y
portmaster -Faf
pkg delete -a -y
rm -rf /usr/local/lib/compat/pkg
rm -rf /var/db/pkg
cd /var/db
tar xvzf ports.tar.gz
cd /usr/ports/ports-mgmt/pkg
make install clean
cd /usr/ports/ports-mgmt/portmaster
make install clean
cd ~
portmaster -d -y --no-confirm --delete-packages --update-if-newer `cat ~/installed-port-list`

The changes needed were mainly the portmaster flags, but the /var/db/ports tree is also backed up and restored: that's where the answers given in those blue 'dialog4ports' windows are stored:

...
cd /var/db
tar cvzf ports.tar.gz ports
...
cd /var/db
tar xvzf ports.tar.gz
...

Doing it this way takes a very long time, as we delete and recompile every package on the system. But since it can be completely automated with the script above, I don't mind letting the computer do its job overnight if that saves me some trouble - such as untangling version issues between recently updated FreeBSD ports...
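
If the machine is remote, it's also worth making sure a dropped SSH session can't kill the rebuild halfway through. A minimal sketch, assuming the commands above are saved as ~/update-ports.sh (the file name is just an example):

# run the rebuild detached from the terminal and keep a full log
nohup sh ~/update-ports.sh > ~/update-ports.log 2>&1 &
# follow the progress, or just check the log in the morning
tail -f ~/update-ports.log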

Let us know in the comment section below how you automate updates on your FreeBSD boxes. Portmaster's suggestion is "brutal", but it is quite effective in the end... I'm sure there are other solutions as well.


11/23/2015

Apache Karaf: no matching cipher found: client aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se server

While deploying a custom Karaf distribution built with the karaf-maven-plugin, I ran into some very strange behaviour on FreeBSD that did not occur on my Mac OS development rig.

Connecting through SSH failed:

dfi:~ doma$ ssh karaf@optiplex1 -p 8100
no matching cipher found: client aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se server 

After increasing the log level, here's the exception Karaf logged:

16:24:03,890 INFO  8]-nio2-thread-1 125 shd.server.session.ServerSession Server session created from /192.168.0.21:64951
16:24:03,894 DEBUG 8]-nio2-thread-1 125 shd.server.session.ServerSession Client version string: SSH-2.0-OpenSSH_6.2
16:24:03,900 DEBUG 8]-nio2-thread-1 125 d.common.session.AbstractSession Send SSH_MSG_KEXINIT
16:24:03,901 DEBUG 8]-nio2-thread-1 125 d.common.session.AbstractSession Received SSH_MSG_KEXINIT
16:24:03,902 WARN  8]-nio2-thread-1 125 d.common.session.AbstractSession Exception caught
java.lang.IllegalStateException: Unable to negotiate key exchange for encryption algorithms (client to server) (client: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se / server: )
        at org.apache.sshd.common.session.AbstractSession.negotiate(AbstractSession.java:1159)[125:org.apache.sshd.core:0.14.0]
        at org.apache.sshd.common.session.AbstractSession.doHandleMessage(AbstractSession.java:388)[125:org.apache.sshd.core:0.14.0]
        at org.apache.sshd.common.session.AbstractSession.handleMessage(AbstractSession.java:326)[125:org.apache.sshd.core:0.14.0]
        at org.apache.sshd.common.session.AbstractSession.decode(AbstractSession.java:780)[125:org.apache.sshd.core:0.14.0]
        at org.apache.sshd.common.session.AbstractSession.messageReceived(AbstractSession.java:308)[125:org.apache.sshd.core:0.14.0]
        at org.apache.sshd.common.AbstractSessionIoHandler.messageReceived(AbstractSessionIoHandler.java:54)
        at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:184)
        at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:170)
        at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
        at java.security.AccessController.doPrivileged(Native Method)[:1.8.0_60]
        at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[125:org.apache.sshd.core:0.14.0]
        at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.8.0_60]
        at sun.nio.ch.Invoker.invokeDirect(Invoker.java:157)[:1.8.0_60]
        at sun.nio.ch.UnixAsynchronousSocketChannelImpl.implRead(UnixAsynchronousSocketChannelImpl.java:553)[:1.8.0_60]
        at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:276)[:1.8.0_60]
        at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:297)[:1.8.0_60]
        at java.nio.channels.AsynchronousSocketChannel.read(AsynchronousSocketChannel.java:420)[:1.8.0_60]
        at org.apache.sshd.common.io.nio2.Nio2Session.startReading(Nio2Session.java:170)[125:org.apache.sshd.core:0.14.0]
        at org.apache.sshd.common.io.nio2.Nio2Acceptor$AcceptCompletionHandler.onCompleted(Nio2Acceptor.java:135)[125:org.apache.sshd.core:0.14.0]
        at org.apache.sshd.common.io.nio2.Nio2Acceptor$AcceptCompletionHandler.onCompleted(Nio2Acceptor.java:120)[125:org.apache.sshd.core:0.14.0]
        at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
        at java.security.AccessController.doPrivileged(Native Method)[:1.8.0_60]
        at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[125:org.apache.sshd.core:0.14.0]
        at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.8.0_60]
        at sun.nio.ch.Invoker$2.run(Invoker.java:218)[:1.8.0_60]
        at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)[:1.8.0_60]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)[:1.8.0_60]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)[:1.8.0_60]
        at java.lang.Thread.run(Thread.java:745)[:1.8.0_60]

After trying out all sorts of tips found via Google, I finally realized that setting JAVA_HOME fixes the issue. To solve it for all users, I added this line to /etc/profile:

export JAVA_HOME=/usr/local/openjdk7

You can also put the same line into $HOME/.profile to fix it for a single user only.
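
If you'd rather keep the fix local to the Karaf instance instead of the whole shell environment, the same export can go into Karaf's bin/setenv script, which the start scripts source at startup. A minimal sketch, assuming a stock Karaf layout and the same JDK path as above:

# <karaf-home>/bin/setenv
export JAVA_HOME=/usr/local/openjdk7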

Karaf is beautiful, but this one was ugly. Well, my fault: Karaf actually warns at startup that JAVA_HOME is not set (and that results may vary), but since I had been running it like that for quite a while, I didn't expect this.

9/08/2015

Starting Karaf 4 results in java.lang.ClassNotFoundException: org.apache.karaf.main.Main

If you try to start Karaf and get

#dell:apache-karaf-4.0.1 doma$ bin/karaf 
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/karaf/main/Main
Caused by: java.lang.ClassNotFoundException: org.apache.karaf.main.Main
 at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:247)

...there's a good chance that it is caused by an incorrect KARAF_HOME:

#dell:apache-karaf-4.0.1 doma$ bin/karaf 
Error: Could not find or load main class org.apache.karaf.main.Main
#dell:apache-karaf-4.0.1 doma$ set | grep KARAF
KARAF_HOME=/opt/karaf
#dell:apache-karaf-4.0.1 doma$ unset KARAF_HOME
#dell:apache-karaf-4.0.1 doma$ set | grep KARAF
_=KARAF_HOME
#dell:apache-karaf-4.0.1 doma$ bin/karaf 
        __ __                  ____      
       / //_/____ __________ _/ __/      
      / ,<  / __ `/ ___/ __ `/ /_        
     / /| |/ /_/ / /  / /_/ / __/        
    /_/ |_|\__,_/_/   \__,_/_/         

  Apache Karaf (4.0.1)

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown Karaf.

karaf@root()>
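
Unsetting the variable only fixes the current shell, of course. For a permanent fix, remove or correct the stale export wherever it is defined; a quick sketch, assuming it was exported from one of the usual shell startup files:

# find where the old value comes from...
grep -n KARAF_HOME ~/.profile ~/.bashrc /etc/profile 2>/dev/null
# ...then delete that line or point it at the right installation before starting bin/karaf again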

7/16/2015

Building Hadoop 2.4.0 on Mac OS X Yosemite 10.10.3 with native components

Install pre-requisites

We'll need these for the actual build.

sudo port install cmake gmake gcc48 zlib gzip maven32 apache-ant

Install protobuf 2.5.0

As the latest version currently in MacPorts is 2.6.x, we need to stick to an earlier one:

cd ~/tools
svn co http://svn.macports.org/repository/macports/trunk/dports/devel/protobuf-cpp -r 105333
cd protobuf-cpp/
sudo port install

To verify:

protoc --version
# libprotoc 2.5.0

Acquire sources

As I needed an exact version to reproduce an issue at work, I'll go with 2.4.0 for now. I suppose some of the fixes will work with earlier or later versions as well; look around in the tags folder for other versions.

cd ~/dev
svn co http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.4.0 hadoop-2.4.0
cd hadoop-2.4.0

Fix sources

We need to patch JniBasedUnixGroupsNetgroupMapping:

patch -p0 <<EOF
--- hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c.orig 2015-07-16 17:14:20.000000000 +0200
+++ hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c 2015-07-16 17:17:47.000000000 +0200
@@ -74,7 +74,7 @@
   // endnetgrent)
   setnetgrentCalledFlag = 1;
 #ifndef __FreeBSD__
-  if(setnetgrent(cgroup) == 1) {
+  setnetgrent(cgroup); {
 #endif
     current = NULL;
     // three pointers are for host, user, domain, we only care

EOF

As well as container-executor.c:

patch -p0 <<EOF
--- hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c.orig 2015-07-16 17:49:15.000000000 +0200
+++ hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c 2015-07-16 18:13:03.000000000 +0200
@@ -498,7 +498,7 @@
   char **users = whitelist;
   if (whitelist != NULL) {
     for(; *users; ++users) {
-      if (strncmp(*users, user, LOGIN_NAME_MAX) == 0) {
+      if (strncmp(*users, user, 64) == 0) {
         free_values(whitelist);
         return 1;
       }
@@ -1247,7 +1247,7 @@
               pair);
     result = -1; 
   } else {
-    if (mount("none", mount_path, "cgroup", 0, controller) == 0) {
+    if (mount("none", mount_path, "cgroup", 0) == 0) {
       char *buf = stpncpy(hier_path, mount_path, strlen(mount_path));
       *buf++ = '/';
       snprintf(buf, PATH_MAX - (buf - hier_path), "%s", hierarchy);
@@ -1274,3 +1274,21 @@
   return result;
 }
 
+int fcloseall(void)
+{
+    int succeeded; /* return value */
+    FILE *fds_to_close[3]; /* the size being hardcoded to '3' is temporary */
+    int i; /* loop counter */
+    succeeded = 0;
+    fds_to_close[0] = stdin;
+    fds_to_close[1] = stdout;
+    fds_to_close[2] = stderr;
+    /* max iterations being hardcoded to '3' is temporary: */
+    for ((i = 0); (i < 3); i++) {
+ succeeded += fclose(fds_to_close[i]);
+    }
+    if (succeeded != 0) {
+ succeeded = EOF;
+    }
+    return succeeded;
+}

EOF

Install Oracle JDK 1.7

You'll need to install "Java SE Development Kit 7 (Mac OS X x64)" from Oracle. Then let's create a couple of symlinks, because the build expects tools.jar in a different place:

export JAVA_HOME=`/usr/libexec/java_home -v 1.7`
sudo mkdir $JAVA_HOME/Classes
sudo ln -s $JAVA_HOME/lib/tools.jar $JAVA_HOME/Classes/classes.jar
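
A quick sanity check that the link points where the build (more precisely the Groovy-based jspc plugin, see the error at the bottom of this post) expects it:

ls -l "$JAVA_HOME/Classes/classes.jar"
# should resolve to .../Contents/Home/lib/tools.jar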

Build Hadoop 2.4.0

Sooner or later we were bound to get here, right?

mvn package -Pdist,native -DskipTests -Dtar

If all goes well:

main:
     [exec] $ tar cf hadoop-2.4.0.tar hadoop-2.4.0
     [exec] $ gzip -f hadoop-2.4.0.tar
     [exec] 
     [exec] Hadoop dist tar available at: /Users/doma/dev/hadoop-2.4.0/hadoop-dist/target/hadoop-2.4.0.tar.gz
     [exec] 
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-dist ---
[INFO] Building jar: /Users/doma/dev/hadoop-2.4.0/hadoop-dist/target/hadoop-dist-2.4.0-javadoc.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop Main ................................ SUCCESS [1.177s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [1.548s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [3.394s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.277s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [1.765s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [3.143s]
[INFO] Apache Hadoop MiniKDC ............................. SUCCESS [2.498s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [3.265s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [2.074s]
[INFO] Apache Hadoop Common .............................. SUCCESS [1:26.460s]
[INFO] Apache Hadoop NFS ................................. SUCCESS [4.527s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [0.032s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [2:09.326s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [14.876s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [5.814s]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [2.941s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.034s]
[INFO] hadoop-yarn ....................................... SUCCESS [0.034s]
[INFO] hadoop-yarn-api ................................... SUCCESS [57.713s]
[INFO] hadoop-yarn-common ................................ SUCCESS [20.985s]
[INFO] hadoop-yarn-server ................................ SUCCESS [0.040s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [6.935s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [12.889s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [2.362s]
[INFO] hadoop-yarn-server-applicationhistoryservice ...... SUCCESS [4.059s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [11.368s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [0.467s]
[INFO] hadoop-yarn-client ................................ SUCCESS [4.109s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [0.043s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [2.123s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [1.902s]
[INFO] hadoop-yarn-site .................................. SUCCESS [0.030s]
[INFO] hadoop-yarn-project ............................... SUCCESS [3.828s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [0.069s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [19.507s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [13.039s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [2.232s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [7.625s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [6.198s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [5.440s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [1.534s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [4.577s]
[INFO] hadoop-mapreduce .................................. SUCCESS [2.903s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [3.509s]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [6.723s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [1.705s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [4.460s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [3.330s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [2.585s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [2.361s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [9.603s]
[INFO] Apache Hadoop OpenStack support ................... SUCCESS [3.797s]
[INFO] Apache Hadoop Client .............................. SUCCESS [6.102s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [0.091s]
[INFO] Apache Hadoop Scheduler Load Simulator ............ SUCCESS [3.251s]
[INFO] Apache Hadoop Tools Dist .......................... SUCCESS [5.068s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [0.032s]
[INFO] Apache Hadoop Distribution ........................ SUCCESS [24.974s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 8:54.425s
[INFO] Finished at: Thu Jul 16 18:22:12 CEST 2015
[INFO] Final Memory: 173M/920M
[INFO] ------------------------------------------------------------------------

Using it

First we extract the result of our build. Then a little bit of configuration is needed even for a single-node setup. Don't worry, I'll copy it here for your convenience ;-)

tar -xvzf /Users/doma/dev/hadoop-2.4.0/hadoop-dist/target/hadoop-2.4.0.tar.gz -C ~/tools

The contents of ~/tools/hadoop-2.4.0/etc/hadoop/core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

The contents of ~/tools/hadoop-2.4.0/etc/hadoop/hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
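
If the daemons are started from a shell that doesn't already have JAVA_HOME set, it can also be pinned in the bundled hadoop-env.sh. A sketch, using the same JDK 7 lookup as above:

# ~/tools/hadoop-2.4.0/etc/hadoop/hadoop-env.sh
export JAVA_HOME=$(/usr/libexec/java_home -v 1.7)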

Passwordless SSH

From the official docs:

ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
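
To check that it worked (on OS X, Remote Login has to be enabled under System Preferences > Sharing, otherwise sshd won't accept the connection at all):

ssh localhost exit && echo "passwordless SSH is working"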

Starting up

Let's see what we've done. This is an almost raw copy from the official docs.

  1. Format the filesystem:
    bin/hdfs namenode -format
    
  2. Start NameNode daemon and DataNode daemon:
    sbin/start-dfs.sh
    

    The hadoop daemon log output is written to the $HADOOP_LOG_DIR directory (defaults to $HADOOP_HOME/logs).

  3. Browse the web interface for the NameNode; by default it is available at http://localhost:50070/
  4. Make the HDFS directories required to execute MapReduce jobs:
    bin/hdfs dfs -mkdir /user
    bin/hdfs dfs -mkdir /user/<username>
    
  5. Copy the input files into the distributed filesystem:
    bin/hdfs dfs -put etc/hadoop input
    

    Check if they are there at http://localhost:50070/explorer.html#/

  6. Run some of the examples provided (that's actually one line...):
    bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar grep input output 'dfs[a-z.]+'
    
  7. Examine the output files:

    Copy the output files from the distributed filesystem to the local filesystem and examine them:

    bin/hdfs dfs -get output output
    cat output/*
    

    or

    View the output files on the distributed filesystem:

    bin/hdfs dfs -cat output/*
    
  8. When you're done, stop the daemons with:
    sbin/stop-dfs.sh
    

Possible errors without the fixes & tweaks above

This list is an excerpt from my efforts during the build; it is meant to drive you here via Google ;-) Apply the procedure above and all of these errors will be fixed for you.

Without ProtoBuf

If you don't have protobuf, you'll get the following error:

[INFO] --- hadoop-maven-plugins:2.4.0:protoc (compile-protoc) @ hadoop-common ---
[WARNING] [protoc, --version] failed: java.io.IOException: Cannot run program "protoc": error=2, No such file or directory
[ERROR] stdout: []

Wrong version of ProtoBuf

If you don't have the correct version of protobuf, you'll get

[ERROR] Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.4.0:protoc (compile-protoc) on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: protoc version is 'libprotoc 2.6.1', expected version is '2.5.0' -> [Help 1]

CMAKE missing

If you don't have cmake, you'll get

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project hadoop-common: An Ant BuildException has occured: Execute failed: java.io.IOException: Cannot run program "cmake" (in directory "/Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/target/native"): error=2, No such file or directory
[ERROR] around Ant part ...... @ 4:132 in /Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/target/antrun/build-main.xml

JAVA_HOME missing

If you don't have JAVA_HOME correctly set, you'll get

     [exec] -- Detecting CXX compiler ABI info
     [exec] -- Detecting CXX compiler ABI info - done
     [exec] -- Detecting CXX compile features
     [exec] -- Detecting CXX compile features - done
     [exec] CMake Error at /opt/local/share/cmake-3.2/Modules/FindPackageHandleStandardArgs.cmake:138 (message):
     [exec]   Could NOT find JNI (missing: JAVA_AWT_LIBRARY JAVA_JVM_LIBRARY
     [exec]   JAVA_INCLUDE_PATH JAVA_INCLUDE_PATH2 JAVA_AWT_INCLUDE_PATH)
     [exec] Call Stack (most recent call first):
     [exec]   /opt/local/share/cmake-3.2/Modules/FindPackageHandleStandardArgs.cmake:374 (_FPHSA_FAILURE_MESSAGE)
     [exec]   /opt/local/share/cmake-3.2/Modules/FindJNI.cmake:287 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
     [exec]   JNIFlags.cmake:117 (find_package)
     [exec]   CMakeLists.txt:24 (include)
     [exec] 
     [exec] 
     [exec] -- Configuring incomplete, errors occurred!
     [exec] See also "/Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/target/native/CMakeFiles/CMakeOutput.log".

JniBasedUnixGroupsNetgroupMapping.c patch missing

If you don't have the patch for JniBasedUnixGroupsNetgroupMapping.c above, you'll get

     [exec] [ 38%] Building C object CMakeFiles/hadoop.dir/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c.o
     [exec] /Library/Developer/CommandLineTools/usr/bin/cc  -Dhadoop_EXPORTS -g -Wall -O2 -D_REENTRANT -D_GNU_SOURCE -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -fPIC -I/Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/target/native/javah -I/Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/src/main/native/src -I/Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/src -I/Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/src/src -I/Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/target/native -I/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home/include/darwin -I/opt/local/include -I/Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util    -o CMakeFiles/hadoop.dir/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c.o   -c /Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c
     [exec] /Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c:77:26: error: invalid operands to binary expression ('void' and 'int')
     [exec]   if(setnetgrent(cgroup) == 1) {
     [exec]      ~~~~~~~~~~~~~~~~~~~ ^  ~
     [exec] 1 error generated.
     [exec] make[2]: *** [CMakeFiles/hadoop.dir/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c.o] Error 1
     [exec] make[1]: *** [CMakeFiles/hadoop.dir/all] Error 2
     [exec] make: *** [all] Error 2

fcloseall patch missing

Without applying the fcloseall patch above, you might get the following error:

     [exec] Undefined symbols for architecture x86_64:
     [exec]   "_fcloseall", referenced from:
     [exec]       _launch_container_as_user in libcontainer.a(container-executor.c.o)
     [exec] ld: symbol(s) not found for architecture x86_64
     [exec] collect2: error: ld returned 1 exit status
     [exec] make[2]: *** [target/usr/local/bin/container-executor] Error 1
     [exec] make[1]: *** [CMakeFiles/container-executor.dir/all] Error 2
     [exec] make: *** [all] Error 2

Symlink missing

Without the "export JAVA_HOME=`/usr/libexec/java_home -v 1.7`;sudo mkdir $JAVA_HOME/Classes;sudo ln -s $JAVA_HOME/lib/tools.jar $JAVA_HOME/Classes/classes.jar" line creating the symlinks above, you'll get

Exception in thread "main" java.lang.AssertionError: Missing tools.jar at: /Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home/Classes/classes.jar. Expression: file.exists()
 at org.codehaus.groovy.runtime.InvokerHelper.assertFailed(InvokerHelper.java:395)
 at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.assertFailed(ScriptBytecodeAdapter.java:683)
 at org.codehaus.mojo.jspc.CompilationMojoSupport.findToolsJar(CompilationMojoSupport.groovy:371)
 at org.codehaus.mojo.jspc.CompilationMojoSupport.this$4$findToolsJar(CompilationMojoSupport.groovy)
...

References:

http://java-notes.com/index.php/hadoop-on-osx

https://issues.apache.org/jira/secure/attachment/12602452/HADOOP-9350.patch

http://www.csrdu.org/nauman/2014/01/23/geting-started-with-hadoop-2-2-0-building/

https://developer.apple.com/library/mac/documentation/Porting/Conceptual/PortingUnix/compiling/compiling.html

https://github.com/cooljeanius/libUnixToOSX/blob/master/fcloseall.c

http://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-common/SingleCluster.html


Install Apache Ant 1.8.1 via MacPorts

If the latest version of Apache Ant in MacPorts is not what you're after, you can try to downgrade. Here's how simple it is:

cd ~/tools
svn co http://svn.macports.org/repository/macports/trunk/dports/devel/apache-ant -r 74985
cd apache-ant/
sudo port install

To verify:

# ant -version
# Apache Ant version 1.8.1 compiled on April 30 2010

If you want other revisions, look them up in the MacPorts Trac history for the port: https://trac.macports.org/log/trunk/dports/devel/apache-ant

Switch between versions

But wait, there's more! You can easily switch between the installed versions. This is what makes the whole process comfortable: no need to keep different versions of tools in arbitrary locations...

sudo port activate apache-ant
--->  The following versions of apache-ant are currently installed:
--->      apache-ant @1.8.1_1 (active)
--->      apache-ant @1.8.4_0
--->      apache-ant @1.9.4_0
Error: port activate failed: Registry error: Please specify the full version as recorded in the port registry.

I have these three versions - I used 1.8.4 for the earlier Tomcat build, and 1.8.1 for the Hadoop build (the next post...)

But now that Hadoop is also built on OS X for my work, I can switch back to the latest version:

sudo port activate apache-ant@1.9.4_0
--->  Deactivating apache-ant @1.8.1_1
--->  Cleaning apache-ant
--->  Activating apache-ant @1.9.4_0
--->  Cleaning apache-ant

To verify:

ant -version
# Apache Ant(TM) version 1.9.4 compiled on April 29 2014

Neat, huh?

6/18/2015

Downgrading MacPorts: use Ant 1.8.4 to build Tomcat 6 in Yosemite 10.10.3

Building tomcat6 from MacPorts fails while building jakarta-taglibs-standard-11, due to the Ant 1.9.4 version present in the MacPorts repo. The error manifests itself like this:

...
    [javac] /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_java_jakarta-taglibs-standard-11/jakarta-taglibs-standard-11/work/jakarta-taglibs-standard-1.1.2-src/standard/build.xml:178: warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds
    [javac] Compiling 236 source files to /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_java_jakarta-taglibs-standard-11/jakarta-taglibs-standard-11/work/jakarta-taglibs-standard-1.1.2-src/standard/build/standard/standard/classes
    [javac] Fatal Error: Unable to find package java.lang in classpath or bootclasspath
BUILD FAILED
/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_java_jakarta-taglibs-standard-11/jakarta-taglibs-standard-11/work/jakarta-taglibs-standard-1.1.2-src/standard/build.xml:178: Compile failed; see the compiler error output for details.
...

The build can be fixed by downgrading Ant:

cd ~
svn co -r 94758 http://svn.macports.org/repository/macports/trunk/dports/devel/apache-ant
cd apache-ant
sudo port install

Now tomcat can be installed from ports:

sudo port install tomcat6

Starting tomcat now shows we need some further customization:

# sudo port load tomcat6
# sudo less /opt/local/share/java/tomcat6/logs/catalina.err
2015-06-17 20:25:42.976 jsvc[587:5132] Apple AWT Java VM was loaded on first thread -- can't start AWT.
Jun 17, 2015 8:25:42 PM org.apache.catalina.startup.Bootstrap initClassLoaders
SEVERE: Class loader creation threw exception
java.lang.InternalError: Can't start the AWT because Java was started on the first thread.  Make sure StartOnFirstThread is not specified in your application's Info.plist or on the command line
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1833)
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1730)
        at java.lang.Runtime.loadLibrary0(Runtime.java:823)
        at java.lang.System.loadLibrary(System.java:1044)
        at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:50)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.awt.Toolkit.loadLibraries(Toolkit.java:1605)
        at java.awt.Toolkit.(Toolkit.java:1627)
        at sun.awt.AppContext$2.run(AppContext.java:240)
        at sun.awt.AppContext$2.run(AppContext.java:226)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.awt.AppContext.initMainAppContext(AppContext.java:226)
        at sun.awt.AppContext.access$200(AppContext.java:112)
        at sun.awt.AppContext$3.run(AppContext.java:306)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.awt.AppContext.getAppContext(AppContext.java:287)
        at com.sun.jmx.trace.Trace.out(Trace.java:180)
        at com.sun.jmx.trace.Trace.isSelected(Trace.java:88)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.isTraceOn(DefaultMBeanServerInterceptor.java:1830)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:929)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:916)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
        at com.sun.jmx.mbeanserver.JmxMBeanServer$2.run(JmxMBeanServer.java:1195)
        at java.security.AccessController.doPrivileged(Native Method)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.initialize(JmxMBeanServer.java:1193)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.(JmxMBeanServer.java:225)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.(JmxMBeanServer.java:170)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.newMBeanServer(JmxMBeanServer.java:1401)
        at javax.management.MBeanServerBuilder.newMBeanServer(MBeanServerBuilder.java:93)
        at javax.management.MBeanServerFactory.newMBeanServer(MBeanServerFactory.java:311)
        at javax.management.MBeanServerFactory.createMBeanServer(MBeanServerFactory.java:214)
        at javax.management.MBeanServerFactory.createMBeanServer(MBeanServerFactory.java:175)
        at sun.management.ManagementFactory.createPlatformMBeanServer(ManagementFactory.java:302)
        at java.lang.management.ManagementFactory.getPlatformMBeanServer(ManagementFactory.java:504)
        at org.apache.catalina.startup.Bootstrap.createClassLoader(Bootstrap.java:183)
        at org.apache.catalina.startup.Bootstrap.initClassLoaders(Bootstrap.java:92)
        at org.apache.catalina.startup.Bootstrap.init(Bootstrap.java:207)
        at org.apache.catalina.startup.Bootstrap.init(Bootstrap.java:275)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)

That rings a bell: we need to run headless! To customize MacPorts' Tomcat, edit setenv.local:

# sudo vi /opt/local/share/java/tomcat6/conf/setenv.local

This example uses JDK 1.7 and some self-signed certificate magic [setenv.local]:

JAVA_JVM_VERSION=1.7
JAVA_OPTS="-Djava.awt.headless=true -XX:PermSize=500m -XX:MaxPermSize=800m -Xmx2g -Djavax.net.ssl.keyStore=/Users/doma/.keystore -Djavax.net.ssl.keyStorePassword=password -Djavax.net.ssl.trustStore=/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/jre/lib/security/cacerts -Djavax.net.ssl.trustStorePassword=changeit"

Restart Tomcat6:

sudo port unload tomcat6
sudo port load tomcat6

Did we fix it?

# sudo less /opt/local/share/java/tomcat6/logs/catalina.err
Jun 17, 2015 9:29:37 PM org.apache.catalina.core.AprLifecycleListener init
INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: .:/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java
Jun 17, 2015 9:29:37 PM org.apache.coyote.http11.Http11Protocol init
INFO: Initializing Coyote HTTP/1.1 on http-8080
Jun 17, 2015 9:29:37 PM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 507 ms
Jun 17, 2015 9:29:37 PM org.apache.catalina.core.StandardService start
INFO: Starting service Catalina
Jun 17, 2015 9:29:37 PM org.apache.catalina.core.StandardEngine start
INFO: Starting Servlet Engine: Apache Tomcat/6.0.35
Jun 17, 2015 9:29:37 PM org.apache.catalina.startup.HostConfig deployDescriptor
INFO: Deploying configuration descriptor host-manager.xml
Jun 17, 2015 9:29:38 PM org.apache.catalina.startup.HostConfig deployDescriptor
INFO: Deploying configuration descriptor manager.xml
Jun 17, 2015 9:29:38 PM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory docs
Jun 17, 2015 9:29:38 PM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory examples
Jun 17, 2015 9:29:38 PM org.apache.catalina.core.ApplicationContext log
INFO: ContextListener: contextInitialized()
Jun 17, 2015 9:29:38 PM org.apache.catalina.core.ApplicationContext log
INFO: SessionListener: contextInitialized()
Jun 17, 2015 9:29:38 PM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory ROOT
Jun 17, 2015 9:29:38 PM org.apache.coyote.http11.Http11Protocol start
INFO: Starting Coyote HTTP/1.1 on http-8080
Jun 17, 2015 9:29:38 PM org.apache.jk.common.ChannelSocket init
INFO: JK: ajp13 listening on /0.0.0.0:8009
Jun 17, 2015 9:29:38 PM org.apache.jk.server.JkMain start
INFO: Jk running ID=0 time=0/18  config=null
Jun 17, 2015 9:29:38 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 710 ms

Et voila. Tomcat started.
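
A quick smoke test from the shell, assuming the default HTTP connector port from the log above:

curl -I http://localhost:8080/
# an HTTP/1.1 200 OK response means the ROOT webapp is being served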

Background

The same process can be applied to downgrade anything in the ports tree. To find the proper release, see https://trac.macports.org/log/trunk/dports/devel/apache-ant

To see which versions you currently have installed:

# sudo port installed apache-ant
The following ports are currently installed:
  apache-ant @1.8.4_0 (active)
  apache-ant @1.9.4_0

To use 1.9.4 again:

# sudo port activate apache-ant @1.9.4_0
--->  Deactivating apache-ant @1.8.4_0
--->  Cleaning apache-ant
--->  Activating apache-ant @1.9.4_0
--->  Cleaning apache-ant
# ant -version
Apache Ant(TM) version 1.9.4 compiled on April 29 2014

Reference: https://trac.macports.org/wiki/howto/InstallingOlderPort

6/01/2015

IntelliJ IDEA: pass JAVA_HOME, M2_HOME, MAVEN_OPTS to the IDE using Yosemite

Place the following content (adjust it to your taste, obviously) into /Library/LaunchDaemons/setenv.MAVEN_OPTS.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
  <plist version="1.0">
  <dict>
  <key>Label</key>
  <string>setenv.MAVEN_OPTS</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/launchctl</string>
    <string>setenv</string>
    <string>MAVEN_OPTS</string>
    <string>-XX:PermSize=500m -XX:MaxPermSize=800m -Xmx2g -Djavax.net.ssl.keyStore=/Users/doma/.keystore -Djavax.net.ssl.keyStorePassword=password -Djavax.net.ssl.trustStore=/Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home/jre/lib/security/cacerts -Djavax.net.ssl.trustStorePassword=changeit</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>ServiceIPC</key>
  <false/>
</dict>
</plist>

You'll have to either restart your computer or run the following line to apply the changes:

launchctl load -w /Library/LaunchDaemons/setenv.MAVEN_OPTS.plist

The next candidate is M2_HOME; the file to create is /Library/LaunchDaemons/setenv.M2_HOME.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
  <plist version="1.0">
  <dict>
  <key>Label</key>
  <string>setenv.M2_HOME</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/launchctl</string>
    <string>setenv</string>
    <string>M2_HOME</string>
    <string>/opt/local/share/java/apache-maven-3.1.1</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>ServiceIPC</key>
  <false/>
</dict>
</plist>

Again, restart or run the following to apply:

launchctl load -w /Library/LaunchDaemons/setenv.M2_HOME.plist
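
JAVA_HOME, also mentioned in the title, can be handled exactly the same way. A sketch that reuses the M2_HOME file as a template; the JDK path below is only an example, adjust it to your installation:

# copy the existing plist and adapt it for JAVA_HOME
sudo cp /Library/LaunchDaemons/setenv.M2_HOME.plist /Library/LaunchDaemons/setenv.JAVA_HOME.plist
# edit the copy: change the label, replace M2_HOME with JAVA_HOME and set the value to your JDK,
# e.g. /Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home
sudo vi /Library/LaunchDaemons/setenv.JAVA_HOME.plist
launchctl load -w /Library/LaunchDaemons/setenv.JAVA_HOME.plist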

...and with that in place, IntelliJ picks up M2_HOME on its own, so you no longer have to set it by hand after each Maven project import.

Reference: http://www.dowdandassociates.com/blog/content/howto-set-an-environment-variable-in-mac-os-x-launchd-plist/


5/17/2015

Migrating SVN repositories to GIT

mkdir ~/mig
cd ~/mig
rm -rf parent

# import the SVN repository (standard trunk/branches/tags layout), without git-svn metadata lines
git svn clone file:///var/svn/repos/parent --stdlayout --no-metadata
cd parent

# turn the remote-tracking refs created by git-svn into local branches and tags
mv .git/refs/remotes/trunk .git/refs/heads
mv .git/refs/remotes/tags .git/refs/tags

# push everything to the new origin
git remote add origin git@java-notes.com:parent.git
git push origin --all
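
A quick way to check that the history, branches and tags survived the conversion (plain git, run from the freshly converted repository):

git branch -a
git tag -l
git log --oneline | head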

3/24/2015

FreeBSD: Consult installed ports' package message to see their requirements after installation

At installation time, ports display their pkg-message to tell the user about any further configuration steps. If for any reason you missed it (or just want to make sure you've done all the changes required for your system), you can display the messages of all installed ports:
pkg info -D -x `pkg query %n` | less
Excerpt of the results:
...
baobab-3.14.1:
bash-4.3.33:
======================================================================

bash requires fdescfs(5) mounted on /dev/fd

If you have not done it yet, please do the following:

        mount -t fdescfs fdesc /dev/fd

To make it permanent, you need the following lines in /etc/fstab:

        fdesc   /dev/fd         fdescfs         rw      0       0

======================================================================
bdftopcf-1.0.4:
bigreqsproto-1.1.2:
binutils-2.25:
bison-2.7.1,1:
...
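If you only care about a single port, the same -D switch works with one package name too (bash is just an example here):
pkg info -D bash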
Here's how to see how many ports you have installed:
pkg info | wc -l
Enjoy!

3/09/2015

IntelliJ Idea live templates: code for singletons

Say you have this class:

public class TerminalTitlebar {
    public void set(String title) {
        System.out.println("\033]0;" + title + "\007");
    }
}

...and you want a quick way to turn it into a singleton.

File -> Settings -> Live Templates -> Click + to create a new entry.

Enter i for abbreviation, "singleton instance" or something similar for description, and the code in the "Template text" field:

private static $CLASS_NAME$ instance;
public static $CLASS_NAME$ getInstance() {
    return instance == null ? instance = new $CLASS_NAME$() : instance;
}

Now click "Edit Variables", and change the Expression for CLASS_NAME to "className()", press OK

To tell IntelliJ to use the generated snippet in Java code, click "Change" in the "Applicable in" line, and select Java.

Now type the letter i (this was the abbreviation we used earlier) and press TAB and the magic happens:

public class TerminalTitlebar {
    private static TerminalTitlebar instance;

    public static TerminalTitlebar getInstance() {
        return instance == null ? instance = new TerminalTitlebar() : instance;
    }

    public void set(String title) {
        System.out.println("\033]0;" + title + "\007");
    }
}

Voila, you can use it in any class. Enjoy!


1/19/2015

Automatic logger generation using IntelliJ Idea's Live Templates (use it with log4j, slf4j, commons-logging, etc by changing the template line accordingly)

In IntelliJ Idea, go to File -> Settings -> Live Templates, and click the + sign. Fill in the details:

  • Abbreviation: log
  • Description: log
  • Check "Reformat according to style" will indent the generated line appropriately.
  • Checking "Shorten FQ names" will remove "org.apache.log4j." and replace it with an import.

"Template text" is

private static final org.apache.log4j.Logger LOGGER = org.apache.log4j.Logger.getLogger($CLASS_NAME$.class);

The same for slf4j is

private final static Logger log = org.slf4j.LoggerFactory.getLogger($CLASS_NAME$.class);

Now click "Edit Variables", and in the CLASS_NAME row, enter className() in the Expression column.

Almost done - we still need to tell IDEA that this template applies to Java. Where it says "No applicable contexts", click "Define", select JAVA, and click OK.

Open any of your classes, and on a new line under "public class Blah", type "log" and press Tab - the logger line will be created appropriately:

private static final Logger LOGGER = Logger.getLogger(HSQLSequenceGenerator.class);