
7/16/2015

Building Hadoop 2.4.0 on Mac OS X Yosemite 10.10.3 with native components

Install pre-requisites

We'll need these for the actual build.

sudo port install cmake gmake gcc48 zlib gzip maven32 apache-ant
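Before kicking off a long Maven build, it can save time to confirm the tools actually landed on the PATH. A small sketch (the `need` helper is my own, not part of Hadoop's tooling):

```shell
# need: fail fast (with a message) when a required build tool is absent
need() { command -v "$1" >/dev/null 2>&1 || { echo "missing: $1" >&2; return 1; }; }

if need cmake && need gmake && need mvn && need ant; then
  echo "toolchain OK"
fi
```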

Install protobuf 2.5.0

As the latest version in MacPorts is 2.6.x, we need to pin an earlier revision of the port:

cd ~/tools
svn co http://svn.macports.org/repository/macports/trunk/dports/devel/protobuf-cpp -r 105333
cd protobuf-cpp/
sudo port install

To verify:

protoc --version
# libprotoc 2.5.0
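If you script the build, a guard like this aborts early on a version mismatch (the `protoc_version` helper is hypothetical, just parsing the banner shown above):

```shell
# protoc_version: extract the version number from protoc's banner line,
# e.g. "libprotoc 2.5.0" -> "2.5.0"
protoc_version() { printf '%s\n' "$1" | awk '{print $2}'; }

banner="$(protoc --version 2>/dev/null || true)"
if [ "$(protoc_version "$banner")" != "2.5.0" ]; then
  echo "protoc 2.5.0 required, got: ${banner:-nothing}" >&2
fi
```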

Acquire sources

Since I needed an exact version to reproduce an issue at work, I'll go with 2.4.0 for now. Some of the fixes below will likely apply to earlier and later versions as well; look around in the tags folder for other releases.

cd ~/dev
svn co http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.4.0 hadoop-2.4.0
cd hadoop-2.4.0

Fix sources

We need to patch JniBasedUnixGroupsNetgroupMapping.c: on OSX, setnetgrent(3) returns void, while the code assumes the int-returning glibc variant:

patch -p0 <<EOF
--- hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c.orig 2015-07-16 17:14:20.000000000 +0200
+++ hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c 2015-07-16 17:17:47.000000000 +0200
@@ -74,7 +74,7 @@
   // endnetgrent)
   setnetgrentCalledFlag = 1;
 #ifndef __FreeBSD__
-  if(setnetgrent(cgroup) == 1) {
+  setnetgrent(cgroup); {
 #endif
     current = NULL;
     // three pointers are for host, user, domain, we only care

EOF

As well as container-executor.c: OSX doesn't define LOGIN_NAME_MAX, has a four-argument mount(2), and its libc lacks fcloseall(), so we provide one at the end of the file:

patch -p0 <<EOF
--- hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c.orig 2015-07-16 17:49:15.000000000 +0200
+++ hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c 2015-07-16 18:13:03.000000000 +0200
@@ -498,7 +498,7 @@
   char **users = whitelist;
   if (whitelist != NULL) {
     for(; *users; ++users) {
-      if (strncmp(*users, user, LOGIN_NAME_MAX) == 0) {
+      if (strncmp(*users, user, 64) == 0) {
         free_values(whitelist);
         return 1;
       }
@@ -1247,7 +1247,7 @@
               pair);
     result = -1; 
   } else {
-    if (mount("none", mount_path, "cgroup", 0, controller) == 0) {
+    if (mount("none", mount_path, "cgroup", 0) == 0) {
       char *buf = stpncpy(hier_path, mount_path, strlen(mount_path));
       *buf++ = '/';
       snprintf(buf, PATH_MAX - (buf - hier_path), "%s", hierarchy);
@@ -1274,3 +1274,21 @@
   return result;
 }
 
+int fcloseall(void)
+{
+    int succeeded; /* return value */
+    FILE *fds_to_close[3]; /* the size being hardcoded to '3' is temporary */
+    int i; /* loop counter */
+    succeeded = 0;
+    fds_to_close[0] = stdin;
+    fds_to_close[1] = stdout;
+    fds_to_close[2] = stderr;
+    /* max iterations being hardcoded to '3' is temporary: */
+    for ((i = 0); (i < 3); i++) {
+ succeeded += fclose(fds_to_close[i]);
+    }
+    if (succeeded != 0) {
+ succeeded = EOF;
+    }
+    return succeeded;
+}

EOF
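If you'd rather see what a patch would do before it touches the tree, GNU patch has a dry-run mode; the same flag works for the Hadoop patches above. A throwaway demonstration with a hypothetical demo.c:

```shell
cd "$(mktemp -d)"                  # scratch area so nothing real is touched
cat > demo.c <<'EOF'
int x = 1;
EOF
cat > demo.patch <<'EOF'
--- demo.c
+++ demo.c
@@ -1 +1 @@
-int x = 1;
+int x = 2;
EOF

patch --dry-run -p0 < demo.patch   # preview: reports the change, modifies nothing
grep -q 'int x = 1;' demo.c        # file is still untouched
patch -p0 < demo.patch             # now apply for real
grep -q 'int x = 2;' demo.c && echo "patched"
```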

Install Oracle JDK 1.7

You'll need to install "Java SE Development Kit 7 (Mac OS X x64)" from Oracle. Then let's symlink a few things the build expects at a different location:

export JAVA_HOME=`/usr/libexec/java_home -v 1.7`
sudo mkdir $JAVA_HOME/Classes
sudo ln -s $JAVA_HOME/lib/tools.jar $JAVA_HOME/Classes/classes.jar
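What the symlink achieves can be sketched in a scratch directory (a simulation, not a check of the real JDK; the build's jspc plugin looks for Classes/classes.jar, while Oracle's JDK ships those classes as lib/tools.jar):

```shell
# Simulate the JDK layout in a throwaway directory
JDK="$(mktemp -d)"                 # stands in for $JAVA_HOME
mkdir -p "$JDK/lib" "$JDK/Classes"
touch "$JDK/lib/tools.jar"
ln -s "$JDK/lib/tools.jar" "$JDK/Classes/classes.jar"
test -r "$JDK/Classes/classes.jar" && echo "classes.jar resolves"
```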

Build Hadoop 2.4.0

Sooner or later we were bound to get here, right?

mvn package -Pdist,native -DskipTests -Dtar

If all goes well:

main:
     [exec] $ tar cf hadoop-2.4.0.tar hadoop-2.4.0
     [exec] $ gzip -f hadoop-2.4.0.tar
     [exec] 
     [exec] Hadoop dist tar available at: /Users/doma/dev/hadoop-2.4.0/hadoop-dist/target/hadoop-2.4.0.tar.gz
     [exec] 
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-dist ---
[INFO] Building jar: /Users/doma/dev/hadoop-2.4.0/hadoop-dist/target/hadoop-dist-2.4.0-javadoc.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop Main ................................ SUCCESS [1.177s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [1.548s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [3.394s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.277s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [1.765s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [3.143s]
[INFO] Apache Hadoop MiniKDC ............................. SUCCESS [2.498s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [3.265s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [2.074s]
[INFO] Apache Hadoop Common .............................. SUCCESS [1:26.460s]
[INFO] Apache Hadoop NFS ................................. SUCCESS [4.527s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [0.032s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [2:09.326s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [14.876s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [5.814s]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [2.941s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.034s]
[INFO] hadoop-yarn ....................................... SUCCESS [0.034s]
[INFO] hadoop-yarn-api ................................... SUCCESS [57.713s]
[INFO] hadoop-yarn-common ................................ SUCCESS [20.985s]
[INFO] hadoop-yarn-server ................................ SUCCESS [0.040s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [6.935s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [12.889s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [2.362s]
[INFO] hadoop-yarn-server-applicationhistoryservice ...... SUCCESS [4.059s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [11.368s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [0.467s]
[INFO] hadoop-yarn-client ................................ SUCCESS [4.109s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [0.043s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [2.123s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [1.902s]
[INFO] hadoop-yarn-site .................................. SUCCESS [0.030s]
[INFO] hadoop-yarn-project ............................... SUCCESS [3.828s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [0.069s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [19.507s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [13.039s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [2.232s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [7.625s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [6.198s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [5.440s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [1.534s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [4.577s]
[INFO] hadoop-mapreduce .................................. SUCCESS [2.903s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [3.509s]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [6.723s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [1.705s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [4.460s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [3.330s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [2.585s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [2.361s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [9.603s]
[INFO] Apache Hadoop OpenStack support ................... SUCCESS [3.797s]
[INFO] Apache Hadoop Client .............................. SUCCESS [6.102s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [0.091s]
[INFO] Apache Hadoop Scheduler Load Simulator ............ SUCCESS [3.251s]
[INFO] Apache Hadoop Tools Dist .......................... SUCCESS [5.068s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [0.032s]
[INFO] Apache Hadoop Distribution ........................ SUCCESS [24.974s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 8:54.425s
[INFO] Finished at: Thu Jul 16 18:22:12 CEST 2015
[INFO] Final Memory: 173M/920M
[INFO] ------------------------------------------------------------------------

Using it

First we'll extract the results of our build. Then a little bit of configuration is needed, even for a single-node setup. Don't worry, I'll copy it here for your convenience ;-)

tar -xvzf /Users/doma/dev/hadoop-2.4.0/hadoop-dist/target/hadoop-2.4.0.tar.gz -C ~/tools

The contents of ~/tools/hadoop-2.4.0/etc/hadoop/core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

The contents of ~/tools/hadoop-2.4.0/etc/hadoop/hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

Passwordless SSH

From the official docs:

ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Starting up

Let's see what we've built. The following is adapted from the official docs.

  1. Format the filesystem:
    bin/hdfs namenode -format
    
  2. Start NameNode daemon and DataNode daemon:
    sbin/start-dfs.sh
    

    The hadoop daemon log output is written to the $HADOOP_LOG_DIR directory (defaults to $HADOOP_HOME/logs).

  3. Browse the web interface for the NameNode; by default it is available at http://localhost:50070/
  4. Make the HDFS directories required to execute MapReduce jobs:
    bin/hdfs dfs -mkdir /user
    bin/hdfs dfs -mkdir /user/<username>
    
  5. Copy the input files into the distributed filesystem:
    bin/hdfs dfs -put etc/hadoop input
    

    Check if they are there at http://localhost:50070/explorer.html#/

  6. Run some of the examples provided (that's actually one line...):
    bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar grep input output 'dfs[a-z.]+'
    
  7. Examine the output files:

    Copy the output files from the distributed filesystem to the local filesystem and examine them:

    bin/hdfs dfs -get output output
    cat output/*
    

    or

    View the output files on the distributed filesystem:

    bin/hdfs dfs -cat output/*
    
  8. When you're done, stop the daemons with:
    sbin/stop-dfs.sh
    

Possible errors without the fixes & tweaks above

This list is an excerpt from my build attempts; it's meant to drive you here via Google ;-) Apply the procedure above and all of these errors will be fixed for you.

Without ProtoBuf

If you don't have protobuf, you'll get the following error:

[INFO] --- hadoop-maven-plugins:2.4.0:protoc (compile-protoc) @ hadoop-common ---
[WARNING] [protoc, --version] failed: java.io.IOException: Cannot run program "protoc": error=2, No such file or directory
[ERROR] stdout: []

Wrong version of ProtoBuf

If you don't have the correct version of protobuf, you'll get:

[ERROR] Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.4.0:protoc (compile-protoc) on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: protoc version is 'libprotoc 2.6.1', expected version is '2.5.0' -> [Help 1]

CMake missing

If you don't have cmake, you'll get:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project hadoop-common: An Ant BuildException has occured: Execute failed: java.io.IOException: Cannot run program "cmake" (in directory "/Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/target/native"): error=2, No such file or directory
[ERROR] around Ant part ...... @ 4:132 in /Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/target/antrun/build-main.xml

JAVA_HOME missing

If you don't have JAVA_HOME correctly set, you'll get:

     [exec] -- Detecting CXX compiler ABI info
     [exec] -- Detecting CXX compiler ABI info - done
     [exec] -- Detecting CXX compile features
     [exec] -- Detecting CXX compile features - done
     [exec] CMake Error at /opt/local/share/cmake-3.2/Modules/FindPackageHandleStandardArgs.cmake:138 (message):
     [exec]   Could NOT find JNI (missing: JAVA_AWT_LIBRARY JAVA_JVM_LIBRARY
     [exec]   JAVA_INCLUDE_PATH JAVA_INCLUDE_PATH2 JAVA_AWT_INCLUDE_PATH)
     [exec] Call Stack (most recent call first):
     [exec]   /opt/local/share/cmake-3.2/Modules/FindPackageHandleStandardArgs.cmake:374 (_FPHSA_FAILURE_MESSAGE)
     [exec]   /opt/local/share/cmake-3.2/Modules/FindJNI.cmake:287 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
     [exec]   JNIFlags.cmake:117 (find_package)
     [exec]   CMakeLists.txt:24 (include)
     [exec] 
     [exec] 
     [exec] -- Configuring incomplete, errors occurred!
     [exec] See also "/Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/target/native/CMakeFiles/CMakeOutput.log".

JniBasedUnixGroupsNetgroupMapping.c patch missing

If you don't have the patch for JniBasedUnixGroupsNetgroupMapping.c above, you'll get:

     [exec] [ 38%] Building C object CMakeFiles/hadoop.dir/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c.o
     [exec] /Library/Developer/CommandLineTools/usr/bin/cc  -Dhadoop_EXPORTS -g -Wall -O2 -D_REENTRANT -D_GNU_SOURCE -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -fPIC -I/Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/target/native/javah -I/Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/src/main/native/src -I/Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/src -I/Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/src/src -I/Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/target/native -I/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home/include/darwin -I/opt/local/include -I/Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util    -o CMakeFiles/hadoop.dir/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c.o   -c /Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c
     [exec] /Users/doma/dev/hadoop-2.4.0/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c:77:26: error: invalid operands to binary expression ('void' and 'int')
     [exec]   if(setnetgrent(cgroup) == 1) {
     [exec]      ~~~~~~~~~~~~~~~~~~~ ^  ~
     [exec] 1 error generated.
     [exec] make[2]: *** [CMakeFiles/hadoop.dir/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c.o] Error 1
     [exec] make[1]: *** [CMakeFiles/hadoop.dir/all] Error 2
     [exec] make: *** [all] Error 2

fcloseall patch missing

Without applying the fcloseall patch above, you might get the following error:

     [exec] Undefined symbols for architecture x86_64:
     [exec]   "_fcloseall", referenced from:
     [exec]       _launch_container_as_user in libcontainer.a(container-executor.c.o)
     [exec] ld: symbol(s) not found for architecture x86_64
     [exec] collect2: error: ld returned 1 exit status
     [exec] make[2]: *** [target/usr/local/bin/container-executor] Error 1
     [exec] make[1]: *** [CMakeFiles/container-executor.dir/all] Error 2
     [exec] make: *** [all] Error 2

Symlink missing

Without the symlink created above (export JAVA_HOME=`/usr/libexec/java_home -v 1.7`; sudo mkdir $JAVA_HOME/Classes; sudo ln -s $JAVA_HOME/lib/tools.jar $JAVA_HOME/Classes/classes.jar), you'll get:

Exception in thread "main" java.lang.AssertionError: Missing tools.jar at: /Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home/Classes/classes.jar. Expression: file.exists()
 at org.codehaus.groovy.runtime.InvokerHelper.assertFailed(InvokerHelper.java:395)
 at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.assertFailed(ScriptBytecodeAdapter.java:683)
 at org.codehaus.mojo.jspc.CompilationMojoSupport.findToolsJar(CompilationMojoSupport.groovy:371)
 at org.codehaus.mojo.jspc.CompilationMojoSupport.this$4$findToolsJar(CompilationMojoSupport.groovy)
...

References:

http://java-notes.com/index.php/hadoop-on-osx

https://issues.apache.org/jira/secure/attachment/12602452/HADOOP-9350.patch

http://www.csrdu.org/nauman/2014/01/23/geting-started-with-hadoop-2-2-0-building/

https://developer.apple.com/library/mac/documentation/Porting/Conceptual/PortingUnix/compiling/compiling.html

https://github.com/cooljeanius/libUnixToOSX/blob/master/fcloseall.c

http://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-common/SingleCluster.html


Install Apache Ant 1.8.1 via MacPorts

If the latest version of Apache Ant in MacPorts is not what you're after, you can try to downgrade. Here's how simple it is:

cd ~/tools
svn co http://svn.macports.org/repository/macports/trunk/dports/devel/apache-ant -r 74985
cd apache-ant/
sudo port install

To verify:

ant -version
# Apache Ant version 1.8.1 compiled on April 30 2010

If you want other revisions, see the MacPorts Trac log: https://trac.macports.org/log/trunk/dports/devel/apache-ant

Switch between versions

But wait, there's more! You can switch easily between the installed versions. This is what makes the whole process comfortable: no need to store different versions of tools at arbitrary locations...

sudo port activate apache-ant
--->  The following versions of apache-ant are currently installed:
--->      apache-ant @1.8.1_1 (active)
--->      apache-ant @1.8.4_0
--->      apache-ant @1.9.4_0
Error: port activate failed: Registry error: Please specify the full version as recorded in the port registry.
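As the error says, activate needs the full version exactly as recorded in the port registry. A tiny helper (my own, hypothetical) to fish it out of a listing like the one above:

```shell
# full_version: print the registry version string (e.g. @1.8.1_1) of the
# first matching line for a port, given `port installed`-style output on stdin
full_version() { awk -v p="$1" '$1 == p {print $2; exit}'; }

# Against the listing shown above:
listing='  apache-ant @1.8.1_1 (active)
  apache-ant @1.9.4_0'
printf '%s\n' "$listing" | full_version apache-ant
# prints: @1.8.1_1
```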

I have these three versions: I used 1.8.4 for the earlier Tomcat build, and 1.8.1 for the Hadoop build (the next post...)

But now that Hadoop is also built on OSX for my work, I can switch back to the latest version:

sudo port activate apache-ant@1.9.4_0
--->  Deactivating apache-ant @1.8.1_1
--->  Cleaning apache-ant
--->  Activating apache-ant @1.9.4_0
--->  Cleaning apache-ant

To verify:

ant -version
# Apache Ant(TM) version 1.9.4 compiled on April 29 2014

Neat, huh?

6/18/2015

Downgrading MacPorts: use Ant 1.8.4 to build Tomcat 6 in Yosemite 10.10.3

Building Tomcat 6 from MacPorts fails on jakarta-taglibs-standard-11, because Ant 1.9.4 is the version present in the MacPorts repo. The error manifests itself like this:

...
    [javac] /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_java_jakarta-taglibs-standard-11/jakarta-taglibs-standard-11/work/jakarta-taglibs-standard-1.1.2-src/standard/build.xml:178: warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds
    [javac] Compiling 236 source files to /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_java_jakarta-taglibs-standard-11/jakarta-taglibs-standard-11/work/jakarta-taglibs-standard-1.1.2-src/standard/build/standard/standard/classes
    [javac] Fatal Error: Unable to find package java.lang in classpath or bootclasspath
BUILD FAILED
/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_java_jakarta-taglibs-standard-11/jakarta-taglibs-standard-11/work/jakarta-taglibs-standard-1.1.2-src/standard/build.xml:178: Compile failed; see the compiler error output for details.
...

The build can be fixed by downgrading Ant:

cd ~
svn co -r 94758 http://svn.macports.org/repository/macports/trunk/dports/devel/apache-ant
cd apache-ant
sudo port install

Now tomcat can be installed from ports:

sudo port install tomcat6

Starting Tomcat now shows we need some further customization:

# sudo port load tomcat6
# sudo less /opt/local/share/java/tomcat6/logs/catalina.err
2015-06-17 20:25:42.976 jsvc[587:5132] Apple AWT Java VM was loaded on first thread -- can't start AWT.
Jun 17, 2015 8:25:42 PM org.apache.catalina.startup.Bootstrap initClassLoaders
SEVERE: Class loader creation threw exception
java.lang.InternalError: Can't start the AWT because Java was started on the first thread.  Make sure StartOnFirstThread is not specified in your application's Info.plist or on the command line
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1833)
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1730)
        at java.lang.Runtime.loadLibrary0(Runtime.java:823)
        at java.lang.System.loadLibrary(System.java:1044)
        at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:50)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.awt.Toolkit.loadLibraries(Toolkit.java:1605)
        at java.awt.Toolkit.(Toolkit.java:1627)
        at sun.awt.AppContext$2.run(AppContext.java:240)
        at sun.awt.AppContext$2.run(AppContext.java:226)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.awt.AppContext.initMainAppContext(AppContext.java:226)
        at sun.awt.AppContext.access$200(AppContext.java:112)
        at sun.awt.AppContext$3.run(AppContext.java:306)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.awt.AppContext.getAppContext(AppContext.java:287)
        at com.sun.jmx.trace.Trace.out(Trace.java:180)
        at com.sun.jmx.trace.Trace.isSelected(Trace.java:88)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.isTraceOn(DefaultMBeanServerInterceptor.java:1830)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:929)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:916)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
        at com.sun.jmx.mbeanserver.JmxMBeanServer$2.run(JmxMBeanServer.java:1195)
        at java.security.AccessController.doPrivileged(Native Method)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.initialize(JmxMBeanServer.java:1193)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.(JmxMBeanServer.java:225)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.(JmxMBeanServer.java:170)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.newMBeanServer(JmxMBeanServer.java:1401)
        at javax.management.MBeanServerBuilder.newMBeanServer(MBeanServerBuilder.java:93)
        at javax.management.MBeanServerFactory.newMBeanServer(MBeanServerFactory.java:311)
        at javax.management.MBeanServerFactory.createMBeanServer(MBeanServerFactory.java:214)
        at javax.management.MBeanServerFactory.createMBeanServer(MBeanServerFactory.java:175)
        at sun.management.ManagementFactory.createPlatformMBeanServer(ManagementFactory.java:302)
        at java.lang.management.ManagementFactory.getPlatformMBeanServer(ManagementFactory.java:504)
        at org.apache.catalina.startup.Bootstrap.createClassLoader(Bootstrap.java:183)
        at org.apache.catalina.startup.Bootstrap.initClassLoaders(Bootstrap.java:92)
        at org.apache.catalina.startup.Bootstrap.init(Bootstrap.java:207)
        at org.apache.catalina.startup.Bootstrap.init(Bootstrap.java:275)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)

That rings a bell. We need to run headless! To customize MacPorts' Tomcat, edit setenv.local:

# sudo vi /opt/local/share/java/tomcat6/conf/setenv.local

This example uses JDK 1.7 and some self-signed certificate magic [setenv.local]:

JAVA_JVM_VERSION=1.7
JAVA_OPTS="-Djava.awt.headless=true -XX:PermSize=500m -XX:MaxPermSize=800m -Xmx2g -Djavax.net.ssl.keyStore=/Users/doma/.keystore -Djavax.net.ssl.keyStorePassword=password -Djavax.net.ssl.trustStore=/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/jre/lib/security/cacerts -Djavax.net.ssl.trustStorePassword=changeit"

Restart Tomcat6:

sudo port unload tomcat6
sudo port load tomcat6

Did we fix it?

# sudo less /opt/local/share/java/tomcat6/logs/catalina.err
Jun 17, 2015 9:29:37 PM org.apache.catalina.core.AprLifecycleListener init
INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: .:/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java
Jun 17, 2015 9:29:37 PM org.apache.coyote.http11.Http11Protocol init
INFO: Initializing Coyote HTTP/1.1 on http-8080
Jun 17, 2015 9:29:37 PM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 507 ms
Jun 17, 2015 9:29:37 PM org.apache.catalina.core.StandardService start
INFO: Starting service Catalina
Jun 17, 2015 9:29:37 PM org.apache.catalina.core.StandardEngine start
INFO: Starting Servlet Engine: Apache Tomcat/6.0.35
Jun 17, 2015 9:29:37 PM org.apache.catalina.startup.HostConfig deployDescriptor
INFO: Deploying configuration descriptor host-manager.xml
Jun 17, 2015 9:29:38 PM org.apache.catalina.startup.HostConfig deployDescriptor
INFO: Deploying configuration descriptor manager.xml
Jun 17, 2015 9:29:38 PM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory docs
Jun 17, 2015 9:29:38 PM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory examples
Jun 17, 2015 9:29:38 PM org.apache.catalina.core.ApplicationContext log
INFO: ContextListener: contextInitialized()
Jun 17, 2015 9:29:38 PM org.apache.catalina.core.ApplicationContext log
INFO: SessionListener: contextInitialized()
Jun 17, 2015 9:29:38 PM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory ROOT
Jun 17, 2015 9:29:38 PM org.apache.coyote.http11.Http11Protocol start
INFO: Starting Coyote HTTP/1.1 on http-8080
Jun 17, 2015 9:29:38 PM org.apache.jk.common.ChannelSocket init
INFO: JK: ajp13 listening on /0.0.0.0:8009
Jun 17, 2015 9:29:38 PM org.apache.jk.server.JkMain start
INFO: Jk running ID=0 time=0/18  config=null
Jun 17, 2015 9:29:38 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 710 ms

Et voila. Tomcat started.

Background

The same process can be applied to downgrade anything in the ports tree. To find the proper release, see https://trac.macports.org/log/trunk/dports/devel/apache-ant

To see which versions you are currently having:

# sudo port installed apache-ant
The following ports are currently installed:
  apache-ant @1.8.4_0 (active)
  apache-ant @1.9.4_0

To use 1.9.4 again:

# sudo port activate apache-ant @1.9.4_0
--->  Deactivating apache-ant @1.8.4_0
--->  Cleaning apache-ant
--->  Activating apache-ant @1.9.4_0
--->  Cleaning apache-ant
# ant -version
Apache Ant(TM) version 1.9.4 compiled on April 29 2014

Reference: https://trac.macports.org/wiki/howto/InstallingOlderPort

6/01/2015

IntelliJ IDEA: pass JAVA_HOME, M2_HOME, MAVEN_OPTS to the IDE using Yosemite

Place the following content (adapt it to your taste, obviously) in /Library/LaunchDaemons/setenv.MAVEN_OPTS.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
  <plist version="1.0">
  <dict>
  <key>Label</key>
  <string>setenv.MAVEN_OPTS</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/launchctl</string>
    <string>setenv</string>
    <string>MAVEN_OPTS</string>
    <string>-XX:PermSize=500m -XX:MaxPermSize=800m -Xmx2g -Djavax.net.ssl.keyStore=/Users/doma/.keystore -Djavax.net.ssl.keyStorePassword=password -Djavax.net.ssl.trustStore=/Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home/jre/lib/security/cacerts -Djavax.net.ssl.trustStorePassword=changeit</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>ServiceIPC</key>
  <false/>
</dict>
</plist>

You'll have to either restart your computer or run the following line to apply the changes:

launchctl load -w /Library/LaunchDaemons/setenv.MAVEN_OPTS.plist

The next candidate is M2_HOME; the file to create is /Library/LaunchDaemons/setenv.M2_HOME.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
  <plist version="1.0">
  <dict>
  <key>Label</key>
  <string>setenv.M2_HOME</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/launchctl</string>
    <string>setenv</string>
    <string>M2_HOME</string>
    <string>/opt/local/share/java/apache-maven-3.1.1</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>ServiceIPC</key>
  <false/>
</dict>
</plist>

Again, restart or run the following to apply:

launchctl load -w /Library/LaunchDaemons/setenv.M2_HOME.plist
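The title promises JAVA_HOME as well, and the same pattern covers it. A sketch, assuming the JDK 1.7 path seen earlier in this post; file /Library/LaunchDaemons/setenv.JAVA_HOME.plist (note the unique Label per file):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>setenv.JAVA_HOME</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/launchctl</string>
    <string>setenv</string>
    <string>JAVA_HOME</string>
    <string>/Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>ServiceIPC</key>
  <false/>
</dict>
</plist>
```

As before, restart or run launchctl load -w /Library/LaunchDaemons/setenv.JAVA_HOME.plist to apply.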

...and with that, you no longer have to set the value of M2_HOME by hand after each Maven project import.

Reference: http://www.dowdandassociates.com/blog/content/howto-set-an-environment-variable-in-mac-os-x-launchd-plist/


3/03/2010

Configuring Ubuntu to mount a shared HFS partition

If you have followed the earlier parts of this series, you may have already met one of the limitations of Ubuntu's HFS support: it can't write journaled HFS volumes. Thus it makes sense to partition the drives with case-insensitive, non-journaled HFS wherever read/write access is needed. You can also disable journaling manually if you have already finished partitioning with the journaled option turned on.

A decision to take: move the home folder, or make symbolic links?
In earlier iterations of my dual-boot configurations, I had my OSX home folder moved to the shared partition. This works quite well, right up to the point when the system starts after a "dirty shutdown", i.e. when the regular shutdown procedure was not executed properly: the computer froze and was killed with the power switch, an iMac faced a power outage, or a MacBook ran out of battery. In these cases a disk check is performed during startup, but OSX does not wait for the check to finish, and the shared volume stays inaccessible until it does. So after a dirty shutdown you may face an empty home folder and a seemingly brand-new configuration, as if you had just finished installation, because the system falls back to the unmounted /Users/username folder. This can be fixed by logging out and back in; nevertheless, it is not nice.

It's even worse if we do the same with Ubuntu: if the home folder is moved to the shared partition and the partition is dirty on boot, it will be mounted read-only; if I'm not mistaken, this resulted in not being able to log in at all. So in my experience the best option is to leave the home folders where they are (Ubuntu keeps its home folder on the ext3 partition, OSX on the HFS one) and make symbolic links to the documents, pictures, and videos folders on the shared partition. As long as we keep our documents where they belong, they will live on the shared drive.

Mounting an HFS volume on Ubuntu

There are two ways to identify a partition to mount on Linux: by the device name (like /dev/sda1) or by its UUID, a unique identifier generated when the filesystem is created. The UUID option is generally better, since the partition stays identifiable even if its device name changes (like when the number of partitions on a drive changes: /dev/sda3 won't be /dev/sda3 anymore as soon as we remove /dev/sda1 and /dev/sda2).

To see some information about your partitions, you can use the command blkid:


In this screenshot (don't let yourself be fooled by the appearance: this is Ubuntu with a nice gnome-theme applied, so it looks a bit like OSX, pretty neat huh? See here how to install it) you can see that I have four partitions, and I will have to mount the last one, /dev/sda4. Yours can be different, so double-check.

To mount a partition, first we have to create the folder where it will be mounted. Open a terminal, and enter
sudo mkdir /mnt/shared
sudo chmod 777 /mnt/shared
Now we'll add an entry in /etc/fstab. Press Alt+F2 to bring up the Run Application dialog, and enter
gksudo gedit /etc/fstab

Enter your password when asked. This brings up the contents of the fstab file in the text editor with the proper rights to edit it (you can use vi of course, but then you hardly need my assistance anyway...). Add a line to the end of this file:
/dev/sda4 /mnt/shared       hfsplus    rw        0       2
Of course, replace my /dev/sda4 with the correct entry if needed. Save the file and exit the editor.
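For reference, here is what the six whitespace-separated fields of that fstab line mean - a commented sketch of the same example entry:

```
# <device>  <mount point>  <type>   <options>  <dump>  <pass>
/dev/sda4   /mnt/shared    hfsplus  rw         0       2
# dump=0: skip the legacy dump backup tool for this filesystem
# pass=2: fsck order at boot (the root filesystem is 1, 0 disables checking)
```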

Using UUIDs
You could also use the UUID to identify the partition, although for some reason it didn't work for me. Copy the UUID from the blkid output (or, if you are using Jaunty, you can use the sudo vol_id /dev/sda4 command too), and write the fstab line like this:

UUID=aae739de-bfb8-39d6-b60a-a6e47222e74a /mnt/shared       hfsplus    rw        0       2
Somehow, even though it worked the first time, it didn't work after a reboot, so I had to fall back to the device name. If you figure out the reason, please let us know!
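To reduce typos, you can also build the fstab line in the shell first and inspect it before appending it to the file - a minimal sketch (the UUID below is just the example from above; substitute your own from blkid):

```shell
#!/bin/sh
# Build the fstab entry from a UUID variable first, so it can be
# checked before touching /etc/fstab. The UUID here is a placeholder.
UUID="aae739de-bfb8-39d6-b60a-a6e47222e74a"
LINE="UUID=$UUID /mnt/shared hfsplus rw 0 2"
echo "$LINE"
# Once it looks right, append it with root rights:
#   echo "$LINE" | sudo tee -a /etc/fstab
```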

Now we have everything in place. If you reboot, this should mount automagically, but you can mount it right away by entering
sudo mount -a
to a terminal window. This mounts all the filesystems mentioned in fstab - since all the others are mounted already, ours gets mounted now. To list your mounted filesystems, enter
mount
(listing doesn't need sudo). Hopefully your shared partition will be on the list. To shorten the output you can also use
mount | grep sda4
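Since a dirty HFS volume gets mounted read-only (the dirty-shutdown problem mentioned earlier), it can be worth checking writability after mounting. A small sketch: MNT defaults to a throwaway directory here so it is safe to try anywhere; on the real system you'd run it with MNT=/mnt/shared.

```shell
#!/bin/sh
# Check whether a mount point is writable (a dirty HFS+ volume is
# mounted read-only by Linux). MNT defaults to a temp directory;
# set MNT=/mnt/shared to test the real mount.
MNT="${MNT:-$(mktemp -d)}"
if touch "$MNT/.write-test" 2>/dev/null; then
    rm -f "$MNT/.write-test"
    echo "$MNT is writable"
else
    echo "$MNT is read-only - check the volume and remount" >&2
fi
```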

Preparing the shared partition

The shared partition will contain our documents, pictures, music, videos, etc. Now that we have our shared partition mounted, we can create symbolic links from our home folder, so that we don't have to navigate to the shared folder manually.

First let's create the folder for our documents:
mkdir -p /mnt/shared/doma/Documents
(the -p flag creates the doma parent folder in the same step). Don't forget to replace my name with yours, of course.
sudo or not to sudo? Using multiple accounts
We don't need sudo here, since we want the folders to keep our own credentials; by repeating this same process for another user under another name, his/her documents will land on the shared partition as well. Just don't forget to log out and log in with the other user's account in that case.

Now, to be sure we don't lose anything, let's move any already existing documents over:
mv ~/Documents/* /mnt/shared/doma/Documents
Note that the * glob does not match hidden (dot) files; if you have any, move them separately. The Documents folder should now be empty, which we can verify with
ls -A ~/Documents
If it is empty, we are ready to create the symbolic link.

Creating the symbolic link

To create the symbolic link, remove the now-empty folder first, then link:
rmdir ~/Documents
ln -s /mnt/shared/doma/Documents ~
in the terminal. ln -s creates a symbolic link in the home folder. (We remove the empty folder first because if ~/Documents still existed as a directory, ln would quietly create the link inside it instead of replacing it.) Nothing is destroyed in any case: if we remove the link later, we can recreate the folder and move our documents back. And since we've been careful and moved everything out of the folder earlier, it was empty anyway.
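If you want to see the move-then-link pattern in action before touching your real home folder, here is a self-contained rehearsal in a throwaway directory (all the paths and the file are made up for the demo):

```shell
#!/bin/sh
# Rehearse the move-and-link steps in a temporary directory.
set -e
TMP=$(mktemp -d)
mkdir -p "$TMP/home/Documents" "$TMP/shared/doma/Documents"
echo "my report" > "$TMP/home/Documents/report.txt"
mv "$TMP/home/Documents/"* "$TMP/shared/doma/Documents/"   # move contents
rmdir "$TMP/home/Documents"                                # drop the empty folder
ln -s "$TMP/shared/doma/Documents" "$TMP/home/Documents"   # link replaces it
cat "$TMP/home/Documents/report.txt"                       # file reachable via link
rm -rf "$TMP"
```

The cat at the end prints "my report", proving the file is reachable through the link just as it was through the original folder.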

This same procedure can be repeated with the Pictures, Videos and other folders.

Moving the folders on OSX

In order to have a nice and clean system, we should move these folders in OSX as well. Thus, if we'll go to the Documents folder, both OSX and Ubuntu will go to the same folder.

The procedure is quite similar to the previous one. Let's create a small shell-script which will help us:
#!/bin/sh
mkdir -p "/Volumes/shared/doma/$1"
mv ~/"$1"/* "/Volumes/shared/doma/$1" && sudo rm -rf ~/"$1"
ln -s "/Volumes/shared/doma/$1" ~
Copy these lines to TextEdit, make it Plain Text (Format menu) and save the file in your home folder using the name mv.sh - and don't forget to replace my name with yours! The quotes keep folder names containing spaces safe. This shell-script automates all the necessary steps for OSX. We'll have to make it executable first with
chmod +x mv.sh
Now we can execute it for all the folders we want to move. For the Documents folder, enter
./mv.sh Documents
You can do the same with your Movies, Music and Pictures folders if you like - but take iTunes' and iPhoto's proprietary folder structures into account: from Ubuntu you'll see the details of their internal file storage laid bare. In any case, you'll have access to your files - and if you use some other, possibly multi-platform tool to manage your pictures and music, you may end up with a quite usable multi-platform system.

Drop a comment if it works for you, tell us how you've configured your system - and don't hesitate to ask if you have a question!

Thanks for dropping by - see y'all next time!






11/11/2009

Reinstall OSX, and re-partition from scratch with the installer for dual booting with Ubuntu

(This is the third part of the article Dual booting OSX and Ubuntu without REFIT)

My only problem with resizing the original OSX Snow Leopard installation is that it comes with lots of stuff I'm not using, and that consumes quite some valuable disk space. Do you really want to sacrifice 1.2GB on language translations and 1.62GB on printer drivers you don't need? (Well, the printer drivers you may need; I have a Canon printer whose driver is not in OSX, so I'll install it separately anyway - thus I can get rid of that 1.62GB.) Since I only have a 128GB drive and I will install Logic Studio consuming some 60 gigs, I prefer reinstallation. Let's see how it works!

First and foremost, don't forget to back up all your valuable data, since this method will destroy everything on the hard drive. And remember: neither I nor anybody else can or will take responsibility for what you are doing on your own computer!

So pop in that CD (or USB key - did you know you can install Snow Leopard from a USB key?) and start the installation.

First select your language and click the arrow to proceed. In the "Install Mac OS X" window, don't continue just yet; first we will have to re-partition our hard drive. So click Utilities->Disk Utility. Click on your drive, and click on the "Partition" tab. In the "Volume Scheme" dropdown, select "4 Partitions". This will split your drive to four, equally sized partitions. Good enough to start with!

My preferred layout is the following:

osx As I will have Logic Studio installed which will take like 60 gigs, I will have to leave a huge partition for OSX. So I'll make it 74GB (I tell you how I came up with this number: I tried before and Logic just did not fit on 64GB :-) If you don't have any huge programs like Logic Pro or Final Cut, 20GB should do for the OSX partition with Office.
home Will be visible from both Ubuntu and OSX, containing the documents we work with: videos, pictures, source files, etc; thus, it should fill up the space remaining after all the other partitions. Since the HFS driver in Linux is not able to read/write journaled HFS partitions, we have to make this a non-journaled HFS partition. It should be non-case-sensitive as well, since Adobe products get confused by case-sensitive file-systems. So, it will be "Mac OS Extended", which means non-case-sensitive/non-journaled.
ubuntu The Ubuntu partition will contain the Linux OS. 16GB will do for Karmic Koala.
swap The swap partition is the virtual memory; you have to set its size depending on the amount of RAM you have. I have 4GB, and since I want to use hibernation I have to set it to at least 4GB (see [Swap partition size for 4GB RAM - Super User]). If hibernation is not needed, you can get away with 1GB as well.

You should come up with a layout of your own, depending on your actual situation, HDD size, OSX & Linux usages.
I have a 128GB drive, so this is how it will look like:
  • osx(74GB)
  • home(34GB)
  • ubuntu(16GB)
  • swap(5GB)
Click on the bottom-rectangle which says "Untitled 4". This selects the partition for the swap. Enter "swap" in the Name, MS-DOS in the Format, and 5GB in the Size field.

Click on the next rectangle, which says "Untitled 3". This is the partition for the Linux filesystem. Enter "ubuntu" in the Name, MS-DOS in Format and 16GB in the Size field.

Click on the next rectangle saying "Untitled 2". This is the shared partition. Enter "home" in the Name field, as size enter 34GB (or whatever you have calculated as your Shared partition size)

Click on the topmost rectangle which says "Untitled 1". This will be the partition of our OSX installation. Enter "osx" in the Name field; the size should already be fine, since it equals what remains after all our previous partition sizes.

Now you can click on any of the rectangles to verify your partition-sizes. If you are content, press Apply. This will do the partitioning in no time.

When it is finished, you can close Disk Utility. In the Install Mac OS X window, press Continue. Agree to the License Agreement after reading it through, then select your "Macintosh HD" as the target disk. If you click Customize, you can remove Printer Support if you don't have a printer or have your own driver to install after the OSX installation, as well as the Additional Fonts if you don't use those languages. I usually deselect the language translations since I use OSX in English anyway. X11 I usually keep, but that's just because I may need it if I end up doing development on OSX.

So just click OK and Install.

After the OSX installation is finished, you can continue and Install Ubuntu 9.10 "Karmic Koala" on an already partitioned drive.











Install Ubuntu 9.10 "Karmic Koala" on an already partitioned drive

(This is the fourth part of the article Dual booting OSX and Ubuntu without REFIT)

To me it seems 64bit code runs smoother on today's hardware, so I use the 64bit editions when I have the chance. All Apple hardware with Core2Duo (or better, e.g. Core i5, Core i7) processors is capable of running 64bit code; so here we're going to install Ubuntu 9.10 Desktop 64bit, also known as Karmic Koala (under "Alternative download options" you'll find the 64bit edition). The nice thing is that it doesn't really matter: there is hardly any difference in setting up 32 and 64 bit Linux distributions.

Somehow, installing Ubuntu through USB didn't work for me on a Macbook Pro 4.1, so I burnt the image to a CD and installed it from there: pop in and boot from the CD (keep pressing Alt when you turn on your Mac and select the CD), then select "English" and "Try Ubuntu without any change to my computer". Sounds careful, doesn't it?

Ubuntu starts up as a live distro. Double-click on the icon Install Ubuntu 9.10. Select the language to use and click Forward.


Select your timezone and click forward.


Select your keyboard layout and click forward. On a mac, it makes sense to select USA - Macintosh.


Now comes the interesting part, partitioning. Select Specify partitions manually (advanced) and click Forward.


Since OSX is using GPT instead of MBR for partitioning our drives, you can see some strange things here, like the 200MB EFI partition (/dev/sda1) or the 134MB disk spaces between the partitions. This is fine, we just have to "see through the lines". If we write the OSX names next to the partitions everything will be clear:

/dev/sda1    209 MB      EFI
/dev/sda2    74356 MB    osx
/dev/sda3    34000 MB    home
/dev/sda4    16000 MB    ubuntu
/dev/sda5    4131 MB     swap

So, let's mount /dev/sda4 to "/" which will be the root of our Linux filesystem. Double-click on the line beginning with /dev/sda4, in Use as select EXT4 journaling filesystem, click Format the partition, in the Mount Point select "/" and click OK.


Now we're going to create a swap partition. Double-click on the /dev/sda5 line, select swap area under Use as, and click OK.


We can't select HFS in the installer, so we'll have to mount the shared "home" partition ourselves later via the fstab file. For now, we have a root partition and a swap partition, so just click Forward!


Fill in all the necessary parameters in the intimidating "Who are you" dialog and click forward again.


In the Ready to install dialog, click Install (GRUB will go into MBR, which is fine). The installation commences and will be finished in no time. Press Restart Now.

When the Mac starts up, you'll have to hold the Alt key for the operating system selector menu to pop up. Oddly, the additional entry next to our Macintosh HD is called Windows; apparently Apple assumes that if anything sits next to its operating system, it must be Windows. So just select Windows, press ENTER, and we are booting Ubuntu!

In the next post, we will discuss how to configure Ubuntu to have access to the Shared partition and we'll set up some symbolic links in both OSX and Ubuntu to store our Documents, Pictures, Music and Videos in the common folder, so that we have access to them no matter which one we start up.











Resize your existing OSX partitions to make some space for Ubuntu and a "Shared" partition

(For the background, see Dual booting OSX and Ubuntu on a Macbook PRO)

To squeeze Ubuntu onto my Macbook Pro I actually prefer reinstalling from scratch, since Apple installs quite some stuff I never use. But if you don't mind losing some gigabytes, just proceed here: this is the "less destructive" way to go, since your original operating system (and thus your home directory with all your documents) remains intact, provided the partitioning proceeds without error.

Please take my words as a warning: although I have been using dual and triple boot computers for years and the described procedure works for me (in fact I'm doing it whilst writing this article), I can't guarantee that it will work for you as well, and I can't take responsibility for any damage or loss you may encounter. If you still choose to follow me, please back up your home folder NOW.

I'll tell you a short story: about three years ago, I decided that EXT3 was the way to go on my 1.8TB RAID containing about 800GB of data. It was NTFS, there was no utility to convert NTFS partitions to EXT3 in place, and I didn't have a spare 1TB drive at my disposal (at the time 300GB was the top HDD size, at least among what I could buy with my hard-earned money), so I decided to take the risk and proceed without a backup. The plan, using GParted: shrink the NTFS partition to 900GB, create a 900GB EXT3 partition behind it, copy everything over to the EXT3, delete the NTFS, and finally resize the EXT3 to use the whole drive, giving me 1.8TB again. If I'm not mistaken, everything went well up to the last step: the data was copied, the NTFS was deleted - and the EXT3 resize failed.

An early version of GParted? Data inconsistency? No clue. The process was stuck, and after a day (you can imagine that day!) I had to stop it and start looking for ways to retrieve my data. Fortunately I was able to get back almost everything from the deleted NTFS partition, and by now I have external drives backing up my RAID with rsync regularly (and I'm thinking about migrating my backups to EC2 or another cheap cloud provider). I have learnt from my own mistake, so you should learn too: MAKE BACKUPS when dealing with sensitive operations like filesystem resizing.

Let's go! Start Applications->Utilities->Disk Utility and click on your Hard Drive on the left panel. Now on the right side, click Partition.



If you have your original installation, probably you will have one big partition named "Macintosh HD" like me here. We will have to shrink this partition so that our little Karmic Koala fits here too. My preferred layout is the following:

OSX As I will have Logic Studio installed which will take like 60 gigs, I will have to leave a huge partition for OSX. So I'll make it 74GB (I tell you how I came up with this number: I tried before and Logic just did not fit on 64GB :-)
Shared Will be visible from both Ubuntu and OSX, containing the documents we work with: videos, pictures, source files, etc. Thus, it should fill up the space remaining after all the other partitions. Since the HFS driver in Linux is not able to read/write journaled HFS partitions, we have to make this a non-journaled HFS partition. It should be non-case-sensitive, since Adobe products get confused by case-sensitive file-systems. So, it will be "Mac OS Extended", which means non-case-sensitive/non-journaled.
Ubuntu The Ubuntu partition will contain the Linux OS. 16GB will do for Karmic Koala.
Swap The swap partition is the virtual memory; you have to set its size depending on the amount of RAM you have. I have 4GB, and since I want to use hibernation I have to set it to at least 4GB (see [Swap partition size for 4GB RAM - Super User]). If hibernation is not needed, you can get away with 1GB as well.

I have a 128GB drive, so this is how it will look like:
  • OSX(74GB)
  • Shared(34GB)
  • Ubuntu(16GB)
  • Swap(4GB)

Before starting to partition the drive, I advise sitting down with an empty sheet of paper and coming up with proper numbers of your own. If you only use OSX to go on the net and install OpenOffice, 16GB should do for the OSX partition. Ubuntu is also quite happy with 16GB, so if you sacrifice 1GB for SWAP (in case you don't mind losing Ubuntu's hibernation), you can allocate all the remaining space to the SHARED partition, since it can hold all your documents, pictures, movies and music, reachable from either Ubuntu or OSX. The main advantages of this setup are:
  • if you happen to reinstall either OSX or Ubuntu later on, and you only store your data on the shared partition, you won't lose anything
  • you only have to work out a regular backup procedure for the SHARED partition, since everything on the OSX and UBUNTU partitions can be regenerated by reinstallation.
To create the partition layout (don't bother with the size just yet - we'll change that in a sec), we will have to click on the rectangle representing the HDD (which says "Macintosh HD"), and then click the [+] sign next to the dimmed "Option" button, which makes a new partition. This results in two partitions:



Now click on the bottom rectangle on the left side ("Macintosh HD 2" on the picture) and click on the [+] sign again. We have three partitions:


Click on the bottom rectangle again ("Macintosh HD 2 2" above), and [+] again. We have four partitions finally:


So far, so good - we will just have to fix the sizes and the names now. Apple's Disk Utility is a tricky one, since if we change the size of a partition on top, it also changes the size of the partitions below. So we'll go from the bottom.

Click on the last partition on the left side ("Macintosh HD 2 2 2" above). Triple-click on the Name textbox to select its contents and enter the new name: "Swap". Although the name we enter here will be erased along with the filesystem itself when we install Koala, we enter the names so that we clearly see what we are doing. The format can stay, since we'll delete this partition anyway (and we don't have the option to select Linux Swap here anyway); click on Size and enter your preferred swap size. I will enter 4.4GB, and I hope this fixes the hibernation which didn't work in previous installations. Press Enter to update the rectangles:


Three more to go. Click on the rectangle above the one we just worked with ("Macintosh HD 2 2 1" above) to select it. This is the partition for the Linux Operating System. Enter a descriptive "Name" used only whilst partitioning: "Ubuntu". Format can stay again since the partition will be erased anyway by the Ubuntu installer. The size is 16GB. Enter:


With me yet? Let's go. Click on the rectangle "Macintosh HD 2 1", and enter "Shared" in the Name: field. Now click on the Format drop-down list and select "Mac OS Extended" (Do you remember? This is the non-case-sensitive/non-journaled HFS stuff from above, so that Linux can read and write it). The size is 34GB (or whatever your Shared partition size will be). Now press Enter and the rectangles will be updated:


Now you can freely click on any of the rectangles (e.g. partitions) to verify their size. If you are satisfied with the results, click Apply.


Smart Snow Leopard confirms the non-destructive manner of our operations. You have already made plenty of backups of your data, right? After making sure your backups are intact, click Partition to proceed.

Now OSX verifies the disk, and executes the partitioning. In a couple of seconds/minutes, you'll see the results:



Now you can proceed to Install Ubuntu 9.10 "Karmic Koala" on an already partitioned drive.

Dual booting OSX and Ubuntu on a Macbook Pro

In this series of posts we'll talk about how to set up Mac OS X 10.6 "Snow Leopard" to dual boot with Ubuntu 9.10 "Karmic Koala" on a Macbook Pro (the procedure will probably work with earlier and later versions as well - feedback is always welcome!). This setup has quite some "nuances" beyond the technical issues of the installation, e.g. how to set up the home folders, using a shared partition, etc.

Usage scenarios

We have quite some operating systems to choose from, each with their strengths and weaknesses. If I try to summarize my usage of computer systems, I'm doing mainly the following stuff:
  • Net
    • Browsing, Skype, Messenger: OSX and Linux
  • Media (mainly OSX)
    • Pictures: RAW to JPG workflow: Nikon Capture and Apple Aperture on OSX
    • Audio editing: recording and mixing down rehearsals, one track per instrument: Apple Logic Studio on OSX.
    • DVD and CD remastering: grabbing DVDs, CDs, re-compressing them to XVID/MP3: Linux is unbeatable with all the free tools.
  • Software Development (mainly Linux)
    • Java: Eclipse, IntelliJ Idea: Eclipse is already in the Ubuntu repositories, so it feels "integrated" into Ubuntu.
    • Tomcat: from Ubuntu repos
    • PostgreSQL, MySQL: from Ubuntu repos

Net

For browsing the net Firefox is my favorite next to Opera (btw, 56% of the visitors of this site use Firefox - with Explorer having some 20%...). Both Opera and Firefox run fine either on Linux or OSX, so no real preference here. Same with Skype. Concerning Messenger I have the option to use Adium on OSX (which, configured properly is beautiful like OSX itself and can be used with MSN, Facebook chat, Google talk, etc) or Pidgin and Empathy on Ubuntu, which can also be used for MSN, ICQ, Yahoo messenger, etc. Thus, we can go to the Net with either OS; we even have some options to synchronize the settings like bookmarks between the browsers.

Media

Personally, I like to work with audio on OSX, since Linux is - at least today - unfortunately not the best option for this purpose: Logic Studio is pricey, but IMO it's simply the best tool for audio editing (not to mention the painful lack of a Linux driver for my MOTU Ultralite). My other option would be Windows (Sonar), but I'm mainly using Mac and Linux lately, and I just don't like the "instability" feeling of Windows anymore. Rumors say Windows 7 will change this, but I'll give it some time before trying it. Somehow I'm more enthusiastic about the new Karmic Koala and Snow Leopard thingy for now.

For the photography raw workflow (converting Nikon D50 raw NEF images to JPGs), I currently only have experience with Nikon Capture (I used it on XP for years), which - albeit not so nicely - also works on OSX. After following some tutorials on Apple's site, I'm planning to migrate to Apple's Aperture, since it looks promising. But for now I'm using Nikon Capture. So that's also OSX.

DVDs and CDs: the open-source tools in the Ubuntu repos are unbeatable. Just google "ubuntu cd rip" or "ubuntu dvd rip" and you'll see what I mean: everything is built into the repos, no shareware/freeware utilities to hunt for... So Linux is the clear winner here for the moment.

Software development

My two favorite Java IDEs today are IntelliJ Idea and Eclipse (the third main one is Netbeans, but I don't use it lately, although I remember liking it quite a lot too). Both IDEs run perfectly on either OSX or Linux, but - at least for me - life is just easier if software development is done in Linux. True, we have all the open-source stuff as "macports" (you can go as far as entering "sudo port install tomcat6" on an OSX command line, which will download the tomcat6 sources, compile them and install them), yet after developing for years on both OSX and Linux I still have the feeling that in Linux everything is just where it should be, while on OSX I always have to figure something out. Yes, everything can be figured out, yet my "just works" credit goes to Linux where software development is concerned. So I use Ubuntu for software development.

Where is windows?
Windows could be integrated into a triple-boot environment just as easily as OSX and Ubuntu. To be exact, previously, when I had a 320GB HDD in this Macbook, I had Windows too, since Nikon Capture ran best on Windows; but as that was its only use lately and I had to live with a smaller HDD for a while, I had to make some sacrifices and Windows had to go... Basically all we need is one more partition for Windows. Quite oddly though, the Macbook was only able to start Windows with REFIT if it was on a specific partition - I don't remember whether the 3rd or the 4th.

Installation

So you've got your new Mac and you wish to install Ubuntu on it. To have more than one operating system, you'll have to create some additional partitions. You have two ways to proceed with the partitioning: resize your existing OSX partitions to make space for Ubuntu and a Shared partition, or reinstall OSX and re-partition from scratch with the installer (both are described in their own posts).
After the partitioning is done, you can go ahead and Install Ubuntu 9.10 "Karmic Koala" on an already partitioned drive.

In the next post, we will discuss how to configure Ubuntu to have access to the Shared partition and we'll set up some symbolic links in both OSX and Ubuntu to store our Documents, Pictures, Music and Videos in the common folder, so that we have access to them no matter which one we start up.