Tuesday, July 7, 2015

Native Mobile vs Hybrid Mobile = ItsNat Droid

The eternal question: which is better, and when? Native Mobile? Hybrid Mobile?

What follows are some very personal opinions, not necessarily "universal arguments".

Here we go:

  • Language

    • Native Mobile: built using "serious languages" like Swift (iOS) or Java (Android). Serious means: statically typed, good OOP and good performance.
    • Hybrid Mobile: based on the horrible JavaScript (weak typing, poor OOP, not as fast as native languages); unfortunately, if your application UI is sophisticated you must be ready for tons of clumsy JS. In pure web development you have GWT, Dart... to generate JS; in mobile web I suppose it is much harder to avoid JS.

  • UI fidelity to platform

    • Native Mobile: ready in your hands. Just by using the default native components you get a perfect native UI look and feel.
    • Hybrid Mobile: you must use a mobile toolkit that simulates native UIs, and make two visual versions (iOS/Android) of parts of your app.

  • Access to the native API

    • Native Mobile: direct and unlimited access (only the security restrictions of the app apply).
    • Hybrid Mobile: you must use a mobile toolkit, because native access from JavaScript is not unlimited; for instance, in Android the native (Java) methods to be called from JS code must be defined in an interface, and providing that is the job of the mobile toolkit API. If you need stronger integration, be ready to program in native Java. In fact, the term "hybrid" is used because many hybrid applications are really hybrid native-web: some architectural elements are native (menus etc.) and other parts are web rendered. Some things are very annoying, like the necessary asynchronous calls between JS and native code (for instance, in Android the JS code is executed in a dedicated thread different from the main thread).

  • Development Performance


Maybe this is the most opinionated item of this analysis.

In my opinion native development is by far more productive than hybrid thanks to the language, the tools, the natural native integration... Yes, you must build two apps (iOS/Android), usually with two teams. Those two teams together surpass the "one hybrid team" in quality and development speed. Some things can be reused, like data management code produced with a Java-to-Objective-C generation tool as Google does. I recognize that if you need to support Windows Mobile the advantage is not so great.

  • Debugging/Testing

    • Native Mobile: native tools are good enough.
    • Hybrid Mobile: debugging tools have improved, but they are not at the same level as native ones.

  • Version management


On this subject we must differentiate between two types of hybrid applications:

  1. Behavior/UI (HTML/JS) is mainly local. In essence the hybrid app is self-contained and is no different from a native app.
  2. Behavior/UI is mainly delivered remotely. In essence the hybrid app is like a mobile web site packaged into a native app.

If your application is of type 2 you are lucky: version management is far easier than in a native app. In a native app, minor UI and behavior changes require a new release, and you must keep backward compatibility with old versions because upgrading is a task of the end user. This is where hybrid development can shine.

I suspect the Amazon Shop apps are of this type 2:

http://www.theserverside.com/news/2240174316/How-Amazon-discovered-hybrid-HTML5-Java-Android-app-development

"the most compelling reason to incorporate HTML5 into your mobile applications is to get the ability to update the app without requiring an upgrade on the device user's side. This capability makes it both easier and safer to manage apps -- permitting developers to roll out or draw back updates as needed. In the brave new world of continuous deployment and live testing, that's a huge advantage"

And by the way, this type 2 is the reason behind the development of ItsNat Droid.

ItsNat Droid: native UI (and some behavior) delivered remotely.

Enjoy!


Wednesday, June 17, 2015

My pseudo psycho analysis of the "three types of developers" regarding tech adoption

I'm usually obsessed with what type of developer I am: a pioneer adopting tech? A conservative? A creator/seller of my own software? The answer is more complex.

This reflection is the result of reading this article on DZone; it started as a comment, but WTF, I HAVE A BLOG AT LAST!

There are three kinds of people in software development regarding tech adoption:

1) People obsessed with finding the newest and coolest technology

2) More conservative/skeptical people (usually experienced developers and/or people with some "bad experiences")

3) People interested in promoting some concrete tech (people with an agenda)

This is my pseudo-naive psychoanalysis:

1) People obsessed with finding the newest and coolest technology

They are proud of being "pioneers" and early adopters. Sometimes they are good enough to build anything with any kind of tech, including immature or simply wrong tech, but sometimes they will ruin your project, because the motivation is usually too egocentric, and the project's needs and using proven stuff are not the priority.

The problem with these people is their pervasive promotion of unproven or wrong tech, and their readiness to "believe in any new thing".

I remember people proud of "using Java since 1.0"; when I read this I feel a shiver. Java was A GREAT INVENTION: Java prevented Microsoft from becoming the absolute dominant player in the software industry (Java was born when Microsoft was close to becoming a brutal monopoly in everything; Java "saved" the industry, just ask the old Oracle, Sun, IBM guys...). At the same time, Java 1.0 was really mediocre, alpha-quality stuff (1.1 was the first decent release)...

The same applies to any cool new tech of today...

2) More conservative/skeptical people

There are several sub-types:

- Lazy people who won't learn anything new

- Conservative people very resistant to change (fear driven)

- Skeptical people: seasoned developers who have seen too much tech "die" (the C++ decline, XML and object databases, CORBA, SOAP, native desktop development, RUP...), tired of listening to "world savers", "everything will be X" and "silver bullets" again and again.

3) People interested in promoting some concrete tech (people with an agenda)

In practice they are "Bible sellers" for whom everything is a "success case"; their points and arguments can be valid, but they are biased... Sometimes the tech being "sold" becomes very good and achieves some success (or great success) with time, but even in alpha state it is sold as the best thing since sliced bread. Sometimes the merit of the tech is obtained just by "mantra repetition" (frequent repetition of "X is good" becomes "X is good"), by "hiding information", and by inviting imitation to avoid "IT shame" (we are cool, are you cool?).

----------------- My point of view:

I recognize I'm a mix of the three :)

1) People obsessed with finding the newest and coolest technology

This kind of people must exist to promote innovation. Sometimes there really is exciting new tech around the corner: "yes, at last".

In my case I've tried to create this kind of disruptive tech for these pioneer early adopters; most of the time I've failed, but I still have some hope.

Yes, I'm in 1)

2) More conservative/skeptical people

This kind of people must exist to provide a rational point of view: for instance, to resist using DBs with almost no guarantees for critical data, to avoid applying tech and approaches only worthwhile for very big companies, to resist using alpha tech in mission-critical apps...

For instance, I've read THOUSANDS of times how bad synchronous IO and thread scheduling are and how performant asynchronous IO processing is, but the data does not usually match the speeches.

Yes, I'm in 2)

3) People interested in promoting some concrete tech (people with an agenda)

Yes, they are biased. Sometimes they will revolutionize the industry, sometimes it is just a temporary marketing success; in any case they try "to convert you and to spread the WORD".

This kind of people must exist to create exciting new tech; for instance Sun, Oracle, IBM... survived thanks to the invention of Java. Yes, they want to make money and that is not bad; it is up to you to decide what your role is (follower? fanatic? naive re-seller? a satisfied user/customer?).

Yes, I'm in 3). I create and promote open source software; some time ago I tried to make a living from it, now it is mainly pleasure driven.

And you?

Enjoy

Friday, March 13, 2015

JEPLayer, Java 8 and jOOQ: a Match Made in Swiss Heaven


Note: this blog entry is a tutorial of JEPLayer; it is alive and has been updated to JEPLayer v1.3, so ignore the date of this entry.

Last update: 2015-6-3

Why?

Some time ago I promised Lukas Eder to make an example using JEPLayer and jOOQ; Lukas is the main author of that nice and powerful RDBMS Java toolset. A promise is a duty!

What?

This blog entry is a tutorial on how we can use jOOQ runtime SQL generation with the ORM API of JEPLayer. jOOQ is a complete ORM; JEPLayer is also an ORM; they provide two different points of view on solving the object-relational problem in a Java environment.

JEPLayer is a non-intrusive approach to the persistence of pure POJOs, a lightweight alternative to more intrusive and "magic" approaches like JPA implementations. JEPLayer tries to simplify the JDBC lifecycle to the extreme by hiding the JDBC API, but through listeners we can optionally access the underlying pure JDBC layer, avoiding as much as possible the repetition of the JDBC API in JEPLayer. JEPLayer has a special focus on managing JDBC and JTA transactions.

In spite of JEPLayer's ORM capabilities, a powerful SQL syntax modeled by Java objects (instead of plain text) is missing in JEPLayer; jOOQ's SQL modeling in Java is probably the most complete in the world. jOOQ is a perfect match for JEPLayer for writing robust, error-free, refactoring-friendly SQL code in Java.

JEPLayer v1.3 introduces automatic implicit SQL generation for INSERT, UPDATE and DELETE actions, so no explicit SQL is needed; anyway, jOOQ SQL in Java is still invaluable for SELECT queries, where complexity can be cumbersome and error prone.

In this tutorial the use of jOOQ is brief and the SQL examples are very simple; jOOQ's SQL management is much more powerful than shown here.

The other new kid on the block is Java 8 (the Java 1.8 specification); especially interesting for JEPLayer are streams and lambdas. JEPLayer is designed around a fluent API customized with listeners, most of them based on a single method, perfect for simplifying code with lambdas. At the same time, JEPLayer returns "alive" result sets based on the List (and ListIterator) interfaces, and as you know a List can be easily converted to a stream in Java 8 for some types of processing. The code of this tutorial could be even less verbose using lambdas, but some extra, unnecessary variables are kept to show the name of the listener interface involved, because remember, this is a JEPLayer tutorial (not a Java 8 tutorial).
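As a standalone reminder of the two Java 8 features the tutorial leans on (the RowListener interface and the sample values are made up for illustration; no JEPLayer API is involved): a single-method listener implemented with a lambda, and a List processed as a stream.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

class Java8Reminder {
    // Any single-method interface (like most JEPLayer listeners)
    // can be implemented with a lambda instead of an anonymous class.
    interface RowListener {
        void onRow(String row);
    }

    public static void main(String[] args) {
        RowListener listener = row -> System.out.println("Processing " + row);
        listener.onRow("row-1");

        // A List (such as a result list) converted to a stream.
        List<String> emails = Arrays.asList("a@x.com", "b@y.com", "c@x.com");
        List<String> xDomain = emails.stream()
                .filter(e -> e.endsWith("@x.com"))
                .collect(Collectors.toList());
        System.out.println(xDomain); // [a@x.com, c@x.com]
    }
}
```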

The first part of this tutorial just shows the typical DAO class based on JEPLayer to manage a POJO class (Contact). One important feature of JEPLayer is its absolute respect for the user data model: no annotation, interface or similar artifact is needed in the data model, so the data model is clean and independent of the persistence layer (JEPLayer). Only some optional Java bean property conventions are required to ease the mapping of classes and attributes to tables and columns.

The second part of this tutorial shows some usage examples of grouped persistent actions, most of them using JDBC transactions (the JTA API of JEPLayer is not used in this example). Much of the code is repetitive (similar code doing the same thing), because the motivation is to show the extreme JDBC customization allowed by JEPLayer, especially in transaction management.

Where?

The code of this example can be found at:

https://github.com/jmarranz/jeplayer_examples/tree/master/jeplayer_jooq

Requirements

This example has been coded in NetBeans using Maven, the Java 8 JDK, MySQL and the c3p0 connection pool. JEPLayer does not mandate a concrete connection pool; one is used just to show a "real world" configuration.

Because there is no custom SQL code, any other RDBMS could be used with no business code change (just changing the data source bootstrap).

OK now show me the code!

No more blah blah. The methodology of this tutorial is simple: show the code first, then explain the details of every code snippet.

The code of this tutorial has just two parts: the first is the DAO class and usage examples managing a Contact POJO class, and the second part talks about transactions.

We present first the "ultra complex" Contact class
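The embedded snippet is not reproduced here, but given the email, name and phone columns used later, the "ultra complex" Contact class is presumably just a plain JavaBean along these lines (a sketch, not necessarily the exact original):

```java
// A sketch of the Contact POJO: plain getters/setters following
// JavaBean conventions, nothing persistence-specific -- no annotations,
// no base class, no interface required by JEPLayer.
class Contact {
    protected int id;        // generated key
    protected String name;
    protected String phone;
    protected String email;

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getPhone() { return phone; }
    public void setPhone(String phone) { this.phone = phone; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}
```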

The DAO class managing Contact objects


Finally the use cases


Let's Explain: initialization

To initialize JEPLayer (non-JTA) we just need a DataSource:

to finally obtain a JEPLNonJTADataSource object; this object wraps the provided DataSource. In fact JEPLayer is basically a set of wrappers on top of JDBC, nothing new; the "new" JEPLayer-specific part is that you always have optional access to the original JDBC objects when you need some specific configuration or behavior, depending on the phase of the persistence lifecycle, avoiding re-inventing JDBC again and again.

As you can see, jOOQ initialization for MySQL is trivial because we are going to use just a subset of its capabilities.

The previous code configures the JEPLNonJTADataSource root object to disable transactions by default using a JEPLConnectionListener; this is the verbose version of jds.setDefaultAutoCommit(true), and in fact it is also unnecessary because transactions are disabled by default (later we are going to see many options for executing transactions with no need to enable them by default; in an ideal world all persistent actions would run inside transactions, but in practice only changing actions should).


The method createTables() shows how to execute raw SQL code using a JEPLDAL (DAL = Data Access Layer) object. A JEPLDAL object can be a singleton (the same as JEPLDAO objects; in fact that interface inherits from JEPLDAL) and is designed to execute persistent actions when you do not need to convert the requested data to POJOs.



Let's see how ContactDAO is initialized:

This constructor creates a JEPLDAO<Contact> object; this object can be a singleton and can be used to manage Contact objects. Besides implementing the JEPLDAL interface, JEPLDAO<T> provides methods to manage the persistence of the specified class T, in this case Contact.

The constructor is very verbose to show the options for mapping columns and attributes; the parameter mappingMode decides the mapping approach used. In this example all the approaches are equivalent: all attributes are mapped to the columns with the same name, ignoring case. The first one is enough and the simplest in this case:

The JEPLUpdateDAOListenerDefault, when registered, will be used internally to generate the SQL code and parameters to execute the JEPLDAO<Contact>.insert(Contact obj)/update(Contact obj)/delete(Contact obj) methods.

The JEPLResultSetDAOListenerDefault, when registered, will be used to create the POJOs mapped to the resulting rows when executing DAO queries.

If you need more complex bindings and data transformations, use another, more specific approach. The JEPLResultSetDAOBeanMapper is interesting when most attributes match by default but some need a custom binding or must be excluded.


Inserting persistent objects

Insertion example:

This is the insert() method in the DAO:

Because we are not going to return Contact objects, this method uses a DAL query. The values "email", "name", "phone" are nonsense values; they are required by jOOQ and will be replaced by ?. If you need to provide inline values, use inline("some.real@email.com") and similar as parameters (this is jOOQ specific). jOOQ generates parameters in ? format; JEPLayer also allows parameters in :name format to avoid "counting accidents", but because of jOOQ they are not shown in this tutorial (see the JEPLayer Manual). Because this is an insertion, we finally call the method getGeneratedKey(), which calls the similar JDBC method under the hood.
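To see why :name parameters avoid "counting accidents", here is a toy illustration (the NamedParams class and its implementation are made up for this post; this is not JEPLayer's actual parser) that rewrites a :name query into ? form while remembering the positional order, so values can be bound by name instead of by index:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class NamedParams {
    // Rewrites ":name" placeholders as "?" (appended to out) and returns
    // the parameter names in positional order. A toy: it ignores corner
    // cases like ':' inside string literals.
    static List<String> toPositional(String sql, StringBuffer out) {
        List<String> names = new ArrayList<>();
        Matcher m = Pattern.compile(":([A-Za-z_][A-Za-z0-9_]*)").matcher(sql);
        while (m.find()) {
            names.add(m.group(1));
            m.appendReplacement(out, "?");
        }
        m.appendTail(out);
        return names;
    }
}
```

With "UPDATE CONTACT SET NAME = :name WHERE ID = :id" this yields the names [name, id] and the SQL "UPDATE CONTACT SET NAME = ? WHERE ID = ?"; the binding order is derived from the names, so no manual counting is needed.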

There is another insertion example; the result is the same, but it is included to show how we can optionally modify how the results are processed (in this case only one row and one column, the generated id, is expected):


This specific JEPLResultSetDALListener could be registered in the constructor of ContactDAO (do not confuse it with JEPLResultSetDAOListener: this is DAL, not DAO).

Finally, there is a simple DAO sentence for insertion without explicit SQL code and parameters; the JEPLDAO<T>.insert(T obj) method uses under the hood the registered JEPLUpdateDAOListener<T>, which provides the column mapping and the values to insert.



Alternatively, we can use raw SQL code with named or numbered parameters; the code is self-explanatory:




Updating persistent objects

Now the code to update:

Nothing to explain; very similar to insertion, again with jOOQ in action. In this case we call executeUpdate(), which returns the number of rows involved (one in this case).

As with insertion, we can use the simple DAO sentence for updating without explicit SQL code and parameters; the JEPLDAO<T>.update(T obj) method uses under the hood the registered JEPLUpdateDAOListener<T>.



Deleting persistent objects

The code to delete one row:


Again the 0 literal value is not used and a ? is generated instead. The call to executeUpdate() returns the number of rows involved (one in this case).

The same as with insertion and update, we can use the simple DAO sentence for deletion without explicit SQL code and parameters; the JEPLDAO<T>.delete(T obj) method uses under the hood the registered JEPLUpdateDAOListener<T>.



Query and processing alive/active results

The DAO method getJEPLResultSetDAO():


can only be called inside a Connection obtained from the DataSource; we cannot call it directly the way we execute executeUpdate(), because JEPLResultSetDAO holds an active JDBC ResultSet. So we need to wrap the call and the data extraction in a JEPLTask.



One shot query

If you know the number of resulting rows, or you just want to load an affordable subset of rows, there is no need to use a JEPLResultSetDAO. Instead we call getResultSet(), which returns a conventional List<T> (remember you can optionally register a JEPLResultSetDALListener and a mapping listener by calling addJEPLListener() before getResultSet()).



The method setMaxResults() is used in this example to limit the number of results.

One shot query, alternative

The method setMaxResults() is enough, but to show how much configuration is possible, we are going to do the same thing by registering a JEPLPreparedStatementListener to customize the PreparedStatement used under the hood (we have seen before the same kind of customization of the ResultSet). By the way, do not worry about threading: a PreparedStatement is bound to a Connection, and only one thread can hold a Connection.


Counting rows

Because we expect just one row with a single field, there is a specific method, getOneRowFromSingleField().


Select a range

Most of your queries need a range of the results based on a search criterion and an order. This is why setFirstResult() and setMaxResults() exist.
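As an illustration only (the Ranges class is made up for this post, not JEPLayer code), this is what a first/max range conceptually means, applied to an in-memory list; a real query pushes the range down to SQL or the JDBC driver instead of materializing all rows:

```java
import java.util.List;

class Ranges {
    // What setFirstResult()/setMaxResults() conceptually do: skip the
    // first 'first' rows, then keep at most 'max' rows, clamped to the
    // available size.
    static <T> List<T> range(List<T> rows, int first, int max) {
        int from = Math.min(first, rows.size());
        int to = Math.min(from + max, rows.size());
        return rows.subList(from, to);
    }
}
```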



Select a range, alternative

If you are obsessed with control, you can alternatively control how the range is obtained through JDBC-level methods.


Data Access Layer (DAL) level queries

Frequently you want to execute queries returning diverse data beyond model objects; for instance, we may need the number of rows and the average value of a column of a table in a single query. JEPLayer provides two approaches: an alive java.sql.ResultSet wrapped by the JEPLResultSet interface, or a cached result set with the JEPLCachedResultSet interface.

DAL Active Queries

When an alive java.sql.ResultSet wrapped by the JEPLResultSet interface is returned, it is similar to JEPLResultSetDAO, but in this case diverse data is returned instead of data model objects. Because result iteration requires an alive connection, a task is required.



DAL Not Active Queries

When a JEPLCachedResultSet is returned, it is similar to the POJO List returned by JEPLDAOQuery<T>.getResultList(); again diverse data is returned instead of data model objects. No task is required, because a connection is not needed to iterate the result: everything is cached into the JEPLCachedResultSet.



Transactions, transactions, transactions...

One of the most important features of an RDBMS is transactions, more specifically the implied rollback capability of transactions. All-or-nothing persistence is one of the most important features when evaluating the reliability of an IT system.

JEPLayer is conscious of how important transactions are and how tedious and error prone the typical manual demarcation of other DB APIs is (I am not talking about jOOQ, which follows an approach similar to JEPLayer's). This is why transaction demarcation (begin/commit) is defined by a single user method (nested transactions are possible): commit is implicit when the method ends normally, and rollback happens when an exception is thrown. Later we are going to see how we can optionally demarcate transactions manually.

JEPLayer provides support for JDBC and JTA transactions; this tutorial only shows JDBC transactions (controlled by the JDBC auto-commit mode). When auto-commit is set to true (the default), every SQL sentence is executed inside a built-in transaction according to the usual guarantees of an ACID system. Our interest is when auto-commit is set to false and we need to change several rows with several sentences inside a transaction.

The following examples are all rollback examples, because rolling back our changes is how we can verify that our sentences have been executed inside a transaction.

The simplest transaction

In JEPLayer, code executed inside a transaction is always wrapped by the only method (exec()) of a JEPLTask. By setting the autoCommit parameter to false we ensure that JEPLayer executes the task inside a transaction and commits just at the end of the task (or rolls back when an exception is thrown).
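The commit-on-normal-return / rollback-on-exception behavior described above can be sketched in plain JDBC (Task and Transactions.inTransaction are hypothetical names invented for this sketch, not the JEPLayer API; in JEPLayer the JEPLTask plays this role):

```java
import java.sql.Connection;

// A single-method task, implementable with a lambda.
interface Task<T> {
    T exec() throws Exception;
}

final class Transactions {
    // Runs the task inside a JDBC transaction: commit if it returns
    // normally, rollback if it throws.
    static <T> T inTransaction(Connection con, Task<T> task) throws Exception {
        boolean oldAutoCommit = con.getAutoCommit();
        con.setAutoCommit(false); // begin the transaction demarcation
        try {
            T result = task.exec();
            con.commit();          // implicit commit on normal return
            return result;
        } catch (Exception ex) {
            con.rollback();        // implicit rollback on any exception
            throw ex;
        } finally {
            con.setAutoCommit(oldAutoCommit);
        }
    }
}
```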


Transaction by configuring the connection

By using a JEPLConnectionListener we can set auto-commit to false in the Connection going to be used in the transaction. JEPLayer then executes the task inside a transaction.


Transaction by configuring the connection (2)

With a JEPLConnectionListener we can do much more complex things, including manual transaction control.

The previous example can be coded in a generic way, hiding the JDBC Connection object by using JEPLTransaction instead.


Transaction by annotation

Finally, we can specify that a task is going to be executed inside a transaction by annotating it with @JEPLTransactionalNonJTA.


Epilogue

We have seen how to mix JEPLayer, Java 8 and jOOQ to code true POJO-based persistent applications: less verbose thanks to Java 8 and less error-prone thanks to jOOQ.

ENJOY!!

Thursday, February 19, 2015

No longer virgin, uploaded my first jar to Maven Central, and it was not nice

First of all, I must admit I never liked the Maven build system. In spite of its verbosity and limitations, Ant was a great invention: a build tool created for developers, a mix of declarative and imperative approaches. NetBeans did a great job integrating Ant as its primary build system.

The need for automatically downloading artifacts (mainly jars) broke everything; the Ivy system seemed the obvious step forward, but people liked the "false" simplicity of Maven more.

I'm a developer; a developer needs to play with APIs, and a nice environment to compose those APIs and create new custom APIs. At the same time, a build system must be structurally declarative and must have "join points" to help IDEs understand the project structure and provide UI access to build tasks. NetBeans did a good job here, creating a "standard" Ant structure while leaving room for developer customization.

But unfortunately Maven won...

Maven won the "new generation" of build tools that was going to replace Ant.

Maven is a friend of IDEs but a terrible tool for developers. The need to write separate plugins IN JAVA to customize your builds is the worst nightmare for any pragmatic developer who loves tools that promote freedom.

Yes, Maven has provided standardization, but a Stalinist standardization. A signal of this contempt for developer freedom was written on the Maven web site when talking about Ant support in Maven: instead of embracing Ant as a basic extension to avoid reinventing the wheel, Ant is despised in the Maven documentation (in spite of being supported).

I recognize automatic jar downloading is nice, but the price paid has been very high for years.

There are many anti-Maven articles (of course there are also many Maven lovers); this is just one:

http://nealford.com/memeagora/2013/01/22/why_everyone_eventually_hates_maven.html

"Which is why every project eventually hates Maven. Maven is a classic contextual tool: it is opinionated, rigid, generic, and dogmatic, which is exactly what is needed at the beginning of a project. Before anything exists, it’s nice for something to impose a structure, and to make it trivial to add behavior via plug-ins and other pre-built niceties. But over time, the project becomes less generic and more like a real, messy project. Early on, when no one knows enough to have opinions about things like lifecycle, a rigid system is good. Over time, though, project complexity requires developers to spawn opinions, and tools like Maven don’t care."

The Maven Stalinist approach is one reason why very important JVM players in tool development have never used Maven as their primary development option in spite of the pressure of Maven's "success": players like Liferay, Vaadin, GWT, Groovy/Grails, Android and other names I don't remember. Most of them are now replacing their custom build systems with Gradle, bypassing Maven.

I'm a serial open source developer; I've made many developer tools (see http://www.innowhere.com for a list). I love coding and I love the helpful things IDEs provide to developers, like structural code search (References/Find Usages) and refactoring. I try to be a friend of IDEs; this self-interested friendship pays off in productivity. For me, good IDE integration of the language/framework is not optional, and as a lone cowboy developer I have no need for continuous integration stuff.

The boring part of software developed for public use is releasing and documentation; I must do it even though I don't like it very much, and this is why I want easy and developer-friendly tools.

In an Ant world, packaging was not a problem. I like to provide a simple zip with jars, javadoc, and the manual in PDF or HTML. This "simple" task is affordable in Ant... and near impossible in Maven; the lack of freedom of Maven in this area is so terrible that I still use Ant (maybe in the future I might migrate to Gradle).

Yes, you can avoid Maven: you can use the build system of your preferred IDE, or if you are lucky doing Android development you can use Gradle and Android Studio or IntelliJ (unfortunately Gradle support is not so great in IDEs not based on IntelliJ). I don't know much about Gradle, but I have a very clear intuition: I'll be able to do ANYTHING with Gradle!

The problems start when you feel pressed to publish your artifacts to Maven Central. The Maven Central requirements are very Maven-tool-centric, even if you only want to upload a simple jar. If you can't follow the Gradle path (because of bad support in your IDE) you are forced to follow the Maven way of life...

Maven again has... won.

I think to myself: "you can do it guy, everybody does it, it cannot be so complex; yes, you're a bit dumb but not so dumb".

I'm not new to following corporate steps to bring something to the public; in my current job I've uploaded many Android applications and many releases of those applications. You just need to create a key pair using Java (only once), sign your .apk file with a tool provided by the Android SDK, do some required zip alignment using an Android command, and upload it using the Google Play web UI. The Android documentation of the publishing process is clear and simple.

I have absolutely no problem using a web UI when I must do something only from time to time; I have no need to automate absolutely everything (and, for instance, I don't like to put keys in source code or configuration files).

Full of self-conviction, I start to read the Sonatype guide, that is, "the guide". Minutes later my mind starts to break: "I can't believe how complex it is to upload a fucking signed jar and fill in three params somewhere (groupId, artifactId and version)". You're forced to create a POM; in practice you are forced to use Maven to publish your jar.

Maven again has... won.

One thing that disturbs me is the need to install GnuPG: "no, no, no, I can't believe a JVM tool relies on an external tool".

Instead of following the official Sonatype path I try to find a shortcut (remember, I want to publish free stuff)... bad luck. I find pages like this (pointing back to the Sonatype docs) and this; the last article makes me even angrier, because beyond GnuPG I must install, FOR A JAVA TOOL, an external program named Rultor, installed using the Ruby gem installer... WTF!!

I say to myself "be patient, there is no shortcut, follow the official path", but then I read things like this:

"If, for some reason (for example, license issue or it's a Scala project), you can not provide -sources.jar or -javadoc.jar , please make fake -sources.jar or -javadoc.jar with simple README inside to pass the checking. We do not want to disable the rules because some people tend to skip it if they have an option and we want to keep the quality of the user experience as high as possible."

That is, you must provide sources and javadoc jars even when there is nothing inside them, to keep the quality!!!

OK Maven, you won; according to the Sonatype requirements I make a POM including sources and javadoc generation.

While configuring the javadoc plugin, I can't understand why folder filtering does not work; soon I realize Maven does not clean previously generated markup. Now I understand why Maven Clean is your best friend (and a signal of tool mediocrity).

Time to read the next steps for Maven... finally I got tired of reading how complex the POM becomes... the serious problem of Maven is the excessively declarative approach explained before, which makes any attempt to code any sequential task over-complex.

Fortunately, while searching for how-to articles and commenting on Twitter, @jbaruch, an employee of JFrog (the people behind JCenter), contacted me offering Bintray.com as an alternative way to publish to Maven Central. I read the article "The easy way to Maven Central" and I was sold. Bintray provides a GUI to upload and self-sign your artifacts if you provide your public and private GnuPG keys; with a simple UI action you are published in the JCenter repository, and by providing your Sonatype user and password you can finally, easily, publish to Maven Central.

Bintray helped me break the wall of the Sonatype process. I'm saved!!

Currently I've released RelProxy on JCenter and Maven Central. For releasing I use Ant, calling Maven tasks to generate the required Maven artifacts and a distribution zip with everything. Everything could be automated; I could add signing and uploading from Ant (or maybe from the POM) without Bintray, but Bintray's auto-signing and uploading UI is enough for me: releasing is done only from time to time, most of the releasing process is already automated, and releasing on JCenter is a plus.

Note: don't forget JCenter; for instance, Maven Central is no longer pre-configured in the Google Android environment.

Enjoy


Wednesday, February 18, 2015

You are welcomed

Some days ago my dear colleague & friend Jerónimo López (@jerolba) invited me to write a blog post about my experience publishing my first artifact to Maven Central.

I have spent many years doing software development and I'm very active on social networks, currently on Twitter, and before that in Java online magazines like http://javahispano.org, http://theserverside.com, http://javalobby.org, writing articles and comments, usually under the nickname jmarranz or with my complete name.

Excluding Twitter (@jmarranz), most of my public output is (boring) technical stuff; most of my articles talk about the open source tools I've been making for years, and links to them can be easily found on the web sites of my projects.

I've never had the need to create a blog, because my interest has always been to promote my tools to the big world, and Java magazines are the best place for that, since there was a time I was trying to make my own business.

The problem of this approach to making content is that there is no single place to locate my blah blah production; even my main website http://www.innowhere.com just contains technical info and no articles (only links to them).

It's too late: written articles can be easily found through the pages I control (http://innowhere.com, http://itsnat.org, repositories on GitHub and Google Code, etc.). I'm going to continue doing the same; maybe the difference now is to write first in my "blog" and later in a popular online magazine. Not sure; in this case I must obey Jerónimo, and this is the main reason for creating this blog (yes Jero, you are guilty).

Blog? What blog platform? Medium? WordPress? Blogger? Tumblr? Google+? 

I recognize I'm a Google guy. Yes, maybe Google is not the best place to work in the world, but I like its impressive technology, I love their tools, I love their tech people, I like the Google way of making things. Oh yes, I know Google is a company trying to make money with your data; I'm aware.

So I'm using Google Blogger. It is not the coolest blog platform, but I'm not a cool guy; I'm just an engineer, a programmer, a software developer, a code monkey. No, I'm not going to use the word "architect": I hate this "position", and the brick analogy has caused much pain to the software industry and especially to software developers.

By the way, the default colors of the first Blogger theme are very similar to my "corporate" colors, only the green is missing :)

You are welcomed.