Conversation
Notices
-
@knuthollund@quitter.no You surely noticed that some of my posts are duplicated. As I can see in the logs on the client side, the quitter.no server returned an "HTTP 500" error on the first attempt. Maybe this is because "Use legacy HTTP protocol" is now set to "Auto" on my side...
You may test this using the latest !andstatus commit on GitHub
@moshpirit@quitter.es @mmn@social.umeahackerspace.se @lnxw48@fresh.federati.net
- AndStatus repeated this.
-
@moshpirit Good question. It shouldn't link to a group unless using ! :)
-
@moshpirit !andstatus Maybe this is related? Does the client send a Content-Length header with the data? It shouldn't be a problem adding that anyway. https://social.umeahackerspace.se/url/28848
-
@moshpirit@quitter.es @mmn@social.umeahackerspace.se This is about the issue we are discussing here: https://quitter.no/notice/341434
I hope that @knuthollund@quitter.no will figure out a solution collaborating with @hannes2peer@quitter.se, and then we will be able to recommend the same for quitter.es
-
@hannes2peer@quitter.se Of course, most problems can be fixed from either side (client or server). However, solving the problem via configuration/library updates is much easier than reworking the client application.
Regarding the Mustard application - it is actually an example of non-maintainable code resulting from numerous patches (probably made to fix issues like this one). I will try to solve this issue by reusing an available library, not by patching the current apache.httpclient code...
@moshpirit@quitter.es @mmn@social.umeahackerspace.se
-
@knuthollund @andstatus There's this setting in FastCGI that restricts the request size (which is yet another setting apart from PHP's own max_upload_size and max_post_size). Is that related perhaps? When running Apache and mod_fcgid at least, the setting is: FcgidMaxRequestLen - probably something similar for mod_fastcgi.
-
@mmn @knuthollund The default is something tiny like 128KiB: https://httpd.apache.org/mod_fcgid/mod/mod_fcgid.html#fcgidmaxrequestlen
But that should also affect uploads from the web frontend of course.
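For reference, raising that limit with mod_fcgid looks roughly like the sketch below. The path and the value are illustrative only, not taken from any of the affected servers; check your distribution's layout before applying anything like this.

```apache
# e.g. in the mod_fcgid configuration file (location varies by distribution).
# Raise the per-request body limit from the ~128 KiB default so that
# multipart posts with attachments are not rejected:
FcgidMaxRequestLen 10485760
```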
-
@mmn @knuthollund I can't find a similar setting for mod_fastcgi though, so maybe it was just a shot in the dark. I will try !AndStatus on #quitter.es and see what happens.
-
So, as I mentioned earlier in the thread, if @andstatus@loadaverage.org adds the Content-Length header to the request, then everything is alright? It seems very much like this is not a !gnusocial issue.
-
@knuthollund "Length required" sounds like it requires a content-length header. There is no problem for @AndStatus to send the Content-Length since notice uploads with attachments are extremely predictable. Right?
-
@knuthollund Isn't the chunked stuff what a _server_ is allowed to send? I didn't know clients could require the server to request more chunks. Not sure how that'd work in the server-client model even.
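For context, chunked transfer encoding frames the body as a series of size-prefixed chunks, so the total length never needs to be known up front. A minimal sketch of the wire format, assuming ASCII data (real chunk sizes are byte counts, written in hex):

```java
// Sketch of what a chunked transfer-encoded request body looks like on the
// wire. Each chunk is "<hex size>\r\n<data>\r\n"; a zero-size chunk ends the
// body. This is purely illustrative, not any HTTP library's internals.
public class ChunkedBody {
    static String encode(String... chunks) {
        StringBuilder sb = new StringBuilder();
        for (String chunk : chunks) {
            // Hex length prefix, then the chunk data (ASCII assumed here,
            // so String length equals byte length).
            sb.append(Integer.toHexString(chunk.length())).append("\r\n")
              .append(chunk).append("\r\n");
        }
        sb.append("0\r\n\r\n"); // terminating zero-size chunk
        return sb.toString();
    }

    public static void main(String[] args) {
        // Show the framing with CRLFs made visible:
        System.out.println(encode("Wiki", "pedia").replace("\r\n", "\\r\\n"));
    }
}
```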
-
@andstatus @knuthollund Either way, the problem is not in GNU social, and asking everyone who uses mod_fastcgi/whatever to _change webserver software_ to nginx is unfeasible (I run lighttpd on #quitter.es). Has anyone tried mitigating the problem by sending the Content-Length header yet?
We can't fix anything for this in !gnusocial code as far as I can tell.
-
@mmn@social.umeahackerspace.se @hannes2peer@quitter.se As I see it, you didn't notice that, according to both the error @knuthollund@quitter.no saw in the server logs and the link to the nginx documentation I sent today, "411 content length" is _not_ the real problem: it is just the HTTP error the developers chose to report another problem to the User :-)
Please look at this info: http://wiki.nginx.org/HttpChunkinModule - it shows that the 411 error means unsupported encoding. And it also says that the encoding _is_ supported since some library version... "HTTP 1.1 chunked-encoding request body support for Nginx".
@moshpirit@quitter.es @andstatus@loadaverage.org @lnxw48@fresh.federati.net
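Based on that module's documentation, the nginx-side workaround looks roughly like the sketch below. This is a hedged reading of the linked page, not a tested config; the chunkin module has to be compiled into nginx, it is not in stock distribution packages.

```nginx
# Sketch following the HttpChunkinModule documentation. "chunkin on" goes
# in the http or server context; the named location resumes processing of
# requests that would otherwise be rejected with 411.
chunkin on;

error_page 411 = @my_411_error;
location @my_411_error {
    chunkin_resume;
}
```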
-
@andstatus nginx has nothing to do with this. It is just one out of several webservers in a very varied ecosystem. I am using lighttpd instead on #quitter.es, Apache with mod_fastcgi is used on #quitter.no. #quitter.se works because it does not use CGI.
I do not have any experience whatsoever with Java development, so I have no motivation to set up a debug environment and build #AndStatus with completely inexperienced code editing.
This question remains: have you tried sending the Content-Length header to the servers (#quitter.no, #quitter.es) which return "411 Length required"?
-
@mmn @andstatus ...and even the #nginx "solution" looks like a dirty hack. "on error, go to a page that doesn't treat it as an error!".
HTTP/1.1 describes "411 Length Required" as "The request did not specify the length of its content, which is required by the requested resource". If the resource requires it, the solution is not to add a hack on the resource side but to supply the missing information, i.e. the "Content-Length" header. That should be no problem for a predictable data source such as a notice plus file(s) as attachment.
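To illustrate the point: if the notice text and attachment are assembled into a byte array first, Content-Length is simply that array's length. A minimal sketch, in which the boundary, field names and filename are made up for illustration (they are not what AndStatus or GNU social actually use):

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class MultipartLength {
    // Assemble a toy multipart/form-data body in memory. Once the whole body
    // is a byte array, the Content-Length header value is just its length.
    static byte[] buildBody(String boundary, String text, byte[] attachment) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] head = ("--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"status\"\r\n\r\n"
                + text + "\r\n"
                + "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"media\"; filename=\"img\"\r\n"
                + "Content-Type: application/octet-stream\r\n\r\n")
                .getBytes(StandardCharsets.UTF_8);
        byte[] tail = ("\r\n--" + boundary + "--\r\n")
                .getBytes(StandardCharsets.UTF_8);
        out.write(head, 0, head.length);
        out.write(attachment, 0, attachment.length);
        out.write(tail, 0, tail.length);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] body = buildBody("xyz", "hello", new byte[16]);
        System.out.println("Content-Length: " + body.length);
    }
}
```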
-
@andstatus Maybe Twitter - a single server environment - has implemented this hack too. But we can't all do it. And we're not all going to switch to nginx.
So I guess you haven't tried sending the Content-Length yet. I'll ignore this thread until I know it has been tested and failed. You're welcome to send me the .apk of a test build which sends that header if you still experience problems (which likely won't be "411" errors).
-
@andstatus Different webservers and configs.
Quitter.se runs Apache with mod_php - that does _not_ use CGI.
LoadAverage.org runs nginx v1.6.2, maybe he has even applied the config fix?
I don't know about @vinilox@status.vinilox.eu but I would guess he runs a similar config to #quitter.se
The nodes where "411 Length required" comes up run some form of fastcgi (php5-fpm uses the fastcgi interprocess protocol) _without_ special configurations (I call bypassing error messages special). These are at least quitter.es, quitter.no and social.umeahackerspace.se, for example. My nodes run lighttpd and php5-fpm; quitter.no runs Apache.
Different webservers behave differently, despite there being a specification, and so far the only one that has been shown to work without throwing an error is an nginx configuration with a special, non-default setting which @knuthollund could _not_ replicate using standard Debian (version "stable") packages.
-
@knuthollund You can see which webserver software they use (but not how PHP is loaded, only which version) by running 'curl -I domain.com' :)
-
@andstatus Sorry, I was going to ignore this thread until sending the Content-Length header from AndStatus has been attempted.
-
As I understand it now, the lack of chunked encoding support is the real cause of this incompatibility. Chunked encoding is used exactly when the content length is unknown beforehand.
As suggested here: http://stackoverflow.com/questions/7721554/httpclient-disabling-chunked-encoding , switching to the HTTP 1.0 protocol reliably turns this feature off, and this will ensure compatibility with various legacy/"simple" server implementations.
I will test this and will add a corresponding compatibility option to the Social network editor ("Manage Microblogging systems" now).
PS: I don't set the content length in AndStatus code. Nowhere. It is done (or not) by the underlying apache.httpclient or java.net libraries.
@moshpirit@quitter.es @andstatus@loadaverage.org @hannes2peer@quitter.se @mmn@social.umeahackerspace.se @lnxw48@fresh.federati.net @knuthollund@quitter.no
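The rule the libraries follow can be summarized like this: with a known body length they send Content-Length; with an unknown length, HTTP/1.1 allows falling back to chunked encoding, while HTTP/1.0 has no chunked encoding at all, so the length must be known. A toy sketch of that decision, assuming a convention where a negative length means "unknown" (this is not actual library code):

```java
public class EncodingChoice {
    // Simplified version of the framing decision an HTTP client library
    // makes for a request body. knownLength < 0 means the length is not
    // known in advance (e.g. the body comes from a plain InputStream).
    static String framingHeader(boolean http11, long knownLength) {
        if (knownLength >= 0) {
            return "Content-Length: " + knownLength;
        }
        if (http11) {
            // Only HTTP/1.1 can frame a body of unknown length.
            return "Transfer-Encoding: chunked";
        }
        throw new IllegalStateException("HTTP/1.0 body needs a known length");
    }

    public static void main(String[] args) {
        System.out.println(framingHeader(true, -1));   // unknown length: chunked
        System.out.println(framingHeader(true, 1234)); // known length: Content-Length
    }
}
```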
-
@andstatus@loadaverage.org According to various internet sources, it could perhaps also be possible to just supply a Content-Length of 0 and still use chunked transfer. Untested, though. But if your library (apache.httpclient?) doesn't let you, I understand your problem.
-
I succeeded in posting a message with an attachment to Quitter.no.
This required two changes:
1. Set apache HttpPost request to HTTP 1.0 protocol:
HttpPost request;
...
request.setProtocolVersion(HttpVersion.HTTP_1_0);
2. When creating the Multipart post request, I provide as input for an attachment not an InputStream (as for HTTP 1.1, which causes chunked encoding), but an array of bytes, which I have to create from the same stream beforehand. This is why the content length is known.
BTW, I still don't set content length header anywhere in AndStatus code. This is done at HTTP library level.
The only thing left is to add the "Use legacy HTTP protocol" option to a "Social network".
@moshpirit@quitter.es @andstatus@loadaverage.org @hannes2peer@quitter.se @mmn@social.umeahackerspace.se @lnxw48@fresh.federati.net @knuthollund@quitter.no
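The second change above boils down to draining the attachment stream into memory before the request is built; once the bytes are in an array, the total length is known and the HTTP library can emit Content-Length instead of chunking. A plain-Java sketch (readFully is a hypothetical helper for illustration, not AndStatus code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamBuffer {
    // Read an InputStream to the end and return its contents as a byte
    // array. After this, the total size of the data is known, which is the
    // precondition for the library to set a Content-Length header.
    static byte[] readFully(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = readFully(
                new ByteArrayInputStream("attachment bytes".getBytes()));
        System.out.println("known length: " + data.length);
    }
}
```

The trade-off, of course, is that the whole attachment now sits in memory, which matters on a phone with large files.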
-
Why not just an automatic fallback? If you get the 411 error, you re-send it with HTTP/1.0 @andstatus@loadaverage.org
-
If the _client_ users have to track the _server's_ behaviour it will just cause confusion. The client app should automatically fall back to whatever works so when the server fixes its problem all the client users don't have to configure a setting which they don't even understand the meaning of. @andstatus@loadaverage.org
-
@mmn@social.umeahackerspace.se As we discussed already regarding "SSL Mode" in AndStatus, indeed, it would be good for a client application to adapt to a server automatically.
But as with "SSL Mode", for this second option, "Use legacy HTTP protocol", automatic discovery of the proper connection option _during normal operation_ is not practical, because it would mean substantial time delays and increased network traffic. E.g. in this "411 Length Required" case, for each attempt to send a message with an attachment, AndStatus would first make a failed attempt and only succeed on the second one.
I think we had better integrate such server parameter discovery into Account creation/verification, which may be repeated at the User's explicit request to "Reverify account".
?!
@moshpirit@quitter.es @andstatus@quitter.no @lnxw48@fresh.federati.net @knuthollund@quitter.no @mcscx@quitter.se
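The middle ground being discussed could look like the sketch below: try the normal (HTTP/1.1, chunked) request once, and on a 411 switch to legacy mode and remember that choice, so only the very first post pays the extra round trip. The send function stands in for the real network call; none of the names here are actual AndStatus code.

```java
import java.util.function.Function;

public class LegacyFallback {
    // Remembered per-server preference; once a server answers 411 to a
    // chunked request, all later posts go straight to legacy mode.
    static boolean preferLegacy = false;

    // send.apply(legacy) performs the HTTP request (legacy = HTTP/1.0 with
    // a buffered body and known Content-Length) and returns the status code.
    static int post(Function<Boolean, Integer> send) {
        int status = send.apply(preferLegacy);
        if (status == 411 && !preferLegacy) {
            preferLegacy = true;       // remember for subsequent posts
            status = send.apply(true); // retry with known Content-Length
        }
        return status;
    }

    public static void main(String[] args) {
        // A server that rejects chunked uploads with 411:
        Function<Boolean, Integer> server = legacy -> legacy ? 200 : 411;
        System.out.println(post(server)); // succeeds after one retry
        System.out.println(post(server)); // succeeds on the first try now
    }
}
```

This keeps the setting out of the user's hands entirely, at the cost of one wasted request per server until the preference is learned.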
-
Do you have an already compiled APK? I don't feel like setting up a build environment that I've never used before just to get a test build. :)