<p><b>Playing with mono</b>: Mono@linux tricks, performance metrics, web servers reviews.</p>
<h1>Enable refactoring in Monodevelop 5.9.6 on Linux</h1>
<p><i>2015-10-26</i></p>
<p>After some changes in MonoDevelop, refactoring stopped working on Linux. After some investigation I found a way to enable it again.</p>
<h2>For Monodevelop v 5.11 (compiled from sources)</h2>
<p>
Right-click the solution, uncheck the "Enable refactoring" menu item (it is located under "Options") and then check
it again. After these manipulations refactoring starts to work.
</p>
<h2>For Monodevelop v 5.9.6 (installed from Xamarin repo)</h2>
<p>
Unfortunately, "Enable refactoring" checkbox is available only in the latest
Monodevelop and does not exist in Monodevelop 5.9.6, but it can be enabled
manually.
</p>
<p>Close your solution in MonoDevelop. Go to your solution folder and open the <b>&lt;YourSolution&gt;.userprefs</b> file, then add the
attribute <b>RefactoringSettings.EnableRefactorings="True"</b> to the first XML element, "Properties", in the file.</p>
<p>It should look like this:<br />
<b>&lt;Properties StartupItem="...." RefactoringSettings.EnableRefactorings="True"&gt;</b></p>
<p>Then save the file and open the solution in MonoDevelop again. Refactoring should work now.</p>
<h1>Increase NUnit performance on AppVeyor</h1>
<p><i>2015-02-11</i></p>
<p>I think you're familiar with AppVeyor, a cloud continuous-integration system which builds your project and runs unit tests on Windows-based VMs. It's free for open-source projects and very useful when you need to be sure that your program builds and runs on Windows (for Linux there is Travis CI or docker.io; for OS X you can use Travis CI as well).</p>
<p>When I converted MSTest to NUnit for the <a href="https://github.com/Microsoft/bond">Microsoft Bond</a> project, I found that the NUnit tests ran 4x-5x slower than the equivalent MSTest tests on AppVeyor. That was strange, because my measurements showed the slowness was directly inside the tested methods, not in the NUnit framework itself. I started to investigate the issue and found the NUnitLite NuGet package, which does the same job as nunit-console but much faster. The bad part was that to use NUnitLite you had to create a new console project and reference the NUnitLite NuGet package. That is not always acceptable if you don't want to add stuff unrelated to your project to your sources.</p>
<p>So first I made an AppVeyor script which builds a NUnitLite runner:</p>
<pre>
install_script:
- nuget install NUnitLite -version 3.0.0-alpha-5 -pre
- mkdir nunit
- copy NUnitLite.3.0.0-alpha-5\lib\net45\nunitlite.dll nunit
- copy NUnit.3.0.0-alpha-5\lib\net45\nunit.framework.dll nunit
- csc /platform:anycpu32bitpreferred /out:nunit\nulite.exe /optimize+ NUnitLite.3.0.0-alpha-5\content\Program.cs /r:nunit\nunitlite.dll /r:nunit\nunit.framework.dll
</pre>
<p>To run tests with NUnitLite, there are two ways.</p>
<pre>
test_script:
#use this way if you installed NUnitLite version greater than 3.0.0-alpha-5
- nunit\nulite.exe path\to\your\tests\TestAssembly.dll
#use this way if you installed NUnitLite version 3.0.0-alpha-5 or lower
- copy nunit\* path\to\your\tests
- path\to\your\tests\nulite.exe TestAssembly
</pre>
<p>Please note that in the second case nulite.exe is copied to, and started from, the folder where your tests are located, and the assembly name (without extension) is passed as the nulite.exe argument.</p>
<p>Interestingly, without the <b>/platform:anycpu32bitpreferred</b> argument, nulite.exe runs slowly on AppVeyor. This argument tells the runtime to execute the generated exe in x86 mode on systems which support 32-bit. I tried running nunit-console in x86 mode and that increased speed a lot! To do it, just specify the <b>--x86</b> argument for nunit-console version 3.0, or run <b>nunit-console-x86</b> for NUnit 2.6.3 (which is preinstalled by default on AppVeyor).</p>
<h1>CoreCLR is on github! How to build it on linux.</h1>
<p><i>2015-02-03</i></p>
<p>Today Microsoft announced that CoreCLR (the cross-platform runtime for .NET Core) is on GitHub. I immediately tried to build it on my Ubuntu 14.04 box. There were some prerequisite issues which prevented the build from completing, but after some investigation I found a working set of packages. This is the final script:</p>
<pre>
#Installing Prerequisites
sudo apt-get install git cmake clang-3.5 make llvm-3.5 gcc
#build.sh works only on 64-bit Linux!
git clone https://github.com/dotnet/coreclr
cd coreclr
./build.sh
</pre>
<h1>Monodevelop and using inherited T4 templates</h1>
<p><i>2015-01-23</i></p>
<h1>Introduction</h1>
<p>I am writing <a href="http://github.com/xplicit/BenchmarkSuite">BenchmarkSuite</a>, a framework which helps you write benchmark tests in the same way as NUnit unit tests. You write a method which benchmarks your code, mark it with the <b>[Bench]</b> attribute, and run the bench-console application. It searches the assembly for all methods marked with <b>[Bench]</b>, runs them several times, measures the metrics, calculates the mean, standard deviation and other statistics, and outputs the results to the console and to an XML file. To test the usage of the BenchmarkSuite library, I decided to benchmark various binary serializers on some common operations.</p>
<p>I created the <a href="http://github.com/xplicit/SerializersBenchmarks">SerializersBenchmarks</a> project on github, took some well-known and some lesser-known binary serializers, and started writing benchmarks for them. At first it was fun: I wrote a benchmark, ran the console and immediately saw its results. But when the number of binary serializers grew past 3 and the number of test types grew, it became a pain to add almost identical lines of code (differing only in the name of the serializer and the methods to serialize/deserialize data) to every project. At this stage I decided to automate the process and use code generation.</p>
<h1>Transforming the code to T4</h1>
<p>Look at the benchmark code</p>
<pre name="code" class="brush:csharp">
[Bench]
[Iterations(10000)]
public void SerializeByteArray64KStream()
{
var ser = SerializationContext.Default.GetSerializer<ByteArray64K> ();
var arr = ByteArray64K.Create();
var b = Benchmark.StartNew ();
using (MemoryStream ms = new MemoryStream ()) {
for (int i = 0; i < 10000; i++) {
ms.Position = 0;
ser.Pack(ms,arr);
}
}
b.Stop ();
}
</pre>
<p>It creates a serializer, then creates an object of the serialized type and performs 10000 serializations of the type to a MemoryStream.</p>
<p>For every serializer and type, only a few lines of code differ:
<ul>
<li>function name</li>
<li>creation of serializer</li>
<li>creation of type</li>
<li>calling method to serialize the type to a stream</li>
</ul>
</p>
<p>To reuse the common code, I decided to write one base precompiled T4 template, derive from it in every benchmarking project, and customize the derived template for each serializer's needs. I used <a href="https://msdn.microsoft.com/en-us/library/ee844259.aspx">"Inheritance Pattern: Text in Base Body"</a> from the MSDN library to create the base template. The code of the template:</p>
<pre name="code" class="brush:csharp">
<#
foreach (BenchTypeInfo typeInfo in SerializedTypes) {
#>
[Bench]
[Iterations(<#=typeInfo.Iterations#>)]
public void Serialize<#=typeInfo.Name#>Stream()
{
<# InstantiateSerializer("ser",typeInfo.Name); #>
var arr = <#=typeInfo.Name#>.Create();
var b = Benchmark.StartNew ();
using (MemoryStream ms = new MemoryStream ()) {
for (int i = 0; i < <#=typeInfo.Iterations#>; i++) {
ms.Position = 0;
<# Serialize("ser","arr",typeInfo.Name,"ms"); #>
}
}
b.Stop ();
}
<#
}
#>
<#+
public virtual BenchTypeInfo[] SerializedTypes {
get {
return new BenchTypeInfo[] {
new BenchTypeInfo(typeof(ByteArray64K),10000),
new BenchTypeInfo(typeof(PrimitiveType),1000000)
};
}
}
public virtual void InstantiateSerializer(string name, string type){}
public virtual void Serialize(string serName, string objName,
string objType, string streamName){}
#>
</pre>
<p>What does this template do? For every type added to the <b>SerializedTypes</b> array it generates the text of a serializing function, which serializes the type to a memory stream. In the placeholders where the serializer should be created, it calls the virtual method <b>InstantiateSerializer()</b>, which must be overridden in the derived class to write the code text which instantiates the serializer. Then it calls the virtual method <b>Serialize()</b> to fill the placeholder with the serialization code.</p>
<p>The next step was to create a derived template which fills all the placeholders in the base template. This was a little tricky. First, we must declare that the template inherits from the base template with<br />
<br />
<b>&lt;#@ template language="C#" inherits="BenchArrayBase" #&gt;</b><br />
<br />
Then we need to reference the base template assembly and its namespaces in the derived template. For that there are the directives<br />
<br />
<b>&lt;#@ assembly name="absolute_path_to_assembly" #&gt;</b><br />
<b>&lt;#@ import namespace="namespace_name" #&gt;</b><br />
</p>
<p>Unfortunately, in most cases you don't know the absolute path to the referenced assembly, because the project can be placed anywhere, and putting assemblies into the GAC is not a good way around this. But there is a solution: you can use project macros like <b>${ProjectDir}</b> or <b>${TargetDir}</b> in the assembly name, and they will be expanded to the absolute path. I added the project 'SerializersBenchmarks', which contains the base template, as a reference to the benchmarking projects and used<br />
<br />
<b>&lt;#@ assembly name="${TargetDir}/SerializersBenchmarks.dll" #&gt;</b><br />
<br />
as the reference in the T4 template. If your base template is located in the same assembly as the derived template, you can use the<br />
<br />
<b>&lt;#@ assembly name="${TargetPath}" #&gt;</b><br />
<br />
construction.</p>
<p>So I've added these lines at the top of the template</p>
<pre name="code" class="brush:csharp">
<#@ template language="C#" inherits="BenchArrayBase" #>
<#@ assembly name="${TargetDir}/SerializersBenchmarks.dll" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.Collections.Generic" #>
<#@ import namespace="SerializersBenchmarks.Templates" #>
<# base.TransformText(); #>
</pre>
<p><b>base.TransformText();</b> runs all transformations in the base template and outputs the generated code. During the transformation it calls the overridden methods from the derived template, so I had to define these methods at the end of the file:</p>
<pre name="code" class="brush:csharp">
<#+
public override void InstantiateSerializer(string name, string type)
{
base.WriteLine("var {0} = SerializationContext.Default.GetSerializer<{1}> ();",name, type);
}
public override void Serialize(string name, string objname, string objtype, string stream)
{
//ser.Serialize(ms,arr);
base.WriteLine("{0}.Pack({1},{2});",name,stream,objname);
}
#>
</pre>
<p>As you guessed, <b>base.WriteLine()</b> writes a line into the output generated by the base template. That's all! Only a few lines of code, and we can add a new serializer to our benchmarks without copy-pasting and replacing bunches of strings.</p>
<p>You can look into the sources in the real project:</p>
<p>
<a href="https://github.com/xplicit/SerializersBenchmarks/blob/master/SerializersBenchmarks/Templates/BenchArrayBase.tt">Base Template</a><br />
<a href="https://github.com/xplicit/SerializersBenchmarks/blob/master/MsgPackBench/Templates/ArraysBench.tt">Derived Template</a><br />
<a href="https://github.com/xplicit/SerializersBenchmarks/blob/master/MsgPackBench/Templates/ArraysBench.cs">Generated CS File</a>
</p>
<h1>Issues with Monodevelop</h1>
<p>When I started using inherited T4 templates in MonoDevelop, I found that it did not support them: it threw exceptions when trying to use derived templates. So I made a patch with fixes for MonoDevelop, which was accepted in version 5.8. I also made a patch which allows regenerating all T4 templates in a project or solution (very useful when you change the base template and need to update the generated code in the whole solution): right-click the solution or project and choose the "Tools/Generate T4 Templates" menu item. If you are reading this before MonoDevelop 5.8 is released and want to use T4 inheritance, you can use the latest MonoDevelop dev snapshot. Add the Xamarin dev repository to your repos (see the <a href="http://www.monodevelop.com/download/ci-packages/">instructions</a>), and then run the following commands:</p>
<pre>
sudo apt-get update
sudo apt-get install monodevelop-snapshot-latest
. mono-snapshot monodevelop
monodevelop
</pre>
<h1>Monodevelop project macros</h1>
<p><i>2015-01-03</i></p>
<p>
In Visual Studio you can use macros like $(SolutionName) in a *.csproj file; the full list of macros can be found in <a href="http://msdn.microsoft.com/en-us/library/42x5kfw4.aspx">MSDN</a>. MonoDevelop also has macros, but the list differs from MSDN's. Here is the list of project macros I've found in the MonoDevelop sources. It is accurate for MonoDevelop 5.8; in future versions the list may change.
<b>
<ul>
<li>ProjectFile</li>
<li>ProjectConfig</li>
<li>ProjectConfigName</li>
<li>ProjectConfigPlat</li>
<li>TargetPath</li>
<li>TargetFile</li>
<li>TargetName</li>
<li>TargetDir</li>
<li>TargetExt</li>
<li>ProjectName</li>
<li>ProjectDir</li>
<li>AuthorName</li>
<li>AuthorEmail</li>
<li>AuthorCopyright</li>
<li>AuthorCompany</li>
<li>AuthorTrademark</li>
<li>SolutionFile</li>
<li>SolutionName</li>
<li>SolutionDir</li>
</ul>
</b>
</p>
<h1>Two words about continuous integration for mono projects</h1>
<p><i>2014-06-06</i></p>
<p>GitHub has a great continuous-integration system called Travis CI. I use it for the HyperFastCgi server to check that the solution builds correctly after each commit, and I am going to use it for unit tests in the future. Travis CI has a very simple configuration syntax; for example, the HyperFastCgi .travis.yml file looks like this:</p>
<pre style="background-color: lightgray">
language: c
before_install:
#add badgerports ppa key
- wget http://badgerports.org/directhex.ppa.asc
- sudo apt-key add directhex.ppa.asc
#add badgerports repository
- sudo apt-get install python-software-properties
- sudo add-apt-repository "deb http://badgerports.org $(lsb_release -sc) main"
- sudo apt-get update
#install mono
- sudo apt-get install mono-devel
script:
- ./autogen.sh --prefix=/usr
- make
- sudo make install
</pre>
<p>Yesterday I found that <a href="http://github.com/drone"><b>drone</b></a> (an analogue of Travis CI) added support for GitLab (an analogue of GitHub) two months ago. So now you can run GitHub-like version control with Travis-like continuous integration for your private projects without using GitHub and Travis. I have not tried installing drone on my GitLab server yet, but if it works without serious issues, it's really, really cool!</p>
<h1>Running ASP.NET vNext on mono/linux</h1>
<p><i>2014-06-01</i></p>
<div id="mypost">
<p>This is a quick-start guide to running a "Hello, world" ASP.NET vNext app on mono/linux.</p>
<h2>Installing mono 3.4.1</h2>
<p>First, you need to compile the latest mono version from sources, located at http://github.com/mono/mono. You can follow the docs on the main page, but <b>BEWARE of using --prefix=/usr/local as the autogen.sh option</b>! Before doing that, check where your system mono is installed. You can check it with the <b>which</b> command.</p>
<pre style="background-color: lightgray;">
$ which mono
/usr/bin/mono
</pre>
<p>If mono is located in <b>/usr/bin</b> (Ubuntu, for example, keeps it there), then you should change the prefix to <b>--prefix=/usr</b>; otherwise you'll get two different mono installations and may run into "which copy of the library is being loaded?" issues.
If you use Ubuntu, you can run <a href="https://gist.github.com/xplicit/6784595">this script</a>. It'll install mono, xsp (the mono web server) and the MonoDevelop IDE.</p>
<h2>Installing ASP.NET vNext</h2>
<p>Run the following commands:</p>
<pre style="background-color: lightgray; overflow-x: scroll">
wget https://raw.githubusercontent.com/graemechristie/Home/KvmShellImplementation/kvmsetup.sh
chmod a+x kvmsetup.sh
./kvmsetup.sh
source ~/.kre/kvm/kvm.sh
kvm upgrade
</pre>
<h2>Running "Hello, world!" application</h2>
<pre style="background-color: lightgray; overflow-x: scroll">
git clone https://github.com/davidfowl/HelloWorldVNext
cd HelloWorldVNext
git submodule update --init
kpm restore
cd src/helloworldweb
k web-firefly
</pre>
<p>It starts the web application at localhost:3001. To change the host and port, edit the file firefly/src/main/Firefly/ServerFactory.cs at line 30 and put your host and port there. No need to recompile; just run <b>k web-firefly</b> again.</p>
<p>You can also try running the Nowin host with the command <b>k web</b>, but due to an issue with sockets, you can make only ~1000 requests to your web server.</p>
</div>
<h1>Mono unmanaged calls performance</h1>
<p><i>2014-04-29</i></p>
<p>Let's imagine you implemented some great algorithm in C# and are thinking about improving its performance. You might suggest rewriting some bottleneck part in native C/C++ and calling it from managed code using P/Invoke. That may look like a good idea, because native code is generally faster than managed code, but even the moon has its dark side: in this case, the cost of P/Invoke calls to unmanaged functions. In this post you will find a speed comparison of various approaches, so you can choose the one best fitting your needs.</p>
<h1>PInvoke call</h1>
<p>For example, let's take a function which calculates the sum of the char codes in a string, something like this
(note: all source code can be found on <a href="https://github.com/xplicit/PInvokePerf">github</a>):</p>
<pre name="code" class="brush: csharp">
public static int ManagedCount(string s)
{
int sum = 0;
for (int j = 0; j < s.Length; j++) {
sum+=(int)s[j];
}
return sum;
}
</pre>
<p>To add some complexity, we will pass an array of strings and an index into the array to the function.</p>
<pre name="code" class="brush: csharp">
public static int ManagedCount(string[] arr,int i)
{
int sum = 0;
for (int j = 0; j < arr [i].Length; j++) {
sum+=(int)arr[i][j];
}
return sum;
}
</pre>
<p>OK, that's the managed function we will work with. Now let's translate it to native code.</p>
<pre name="code" class="brush: c">
int
unmanagedCount (guint16 **arr,int index)
{
int sum=0;
guint16 *str=arr[index];
while(*str)
{
sum+=*str;
str++;
}
return sum;
}
</pre>
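<p>Before wiring it into the runtime, the native function can be sanity-checked with a small standalone harness. This is a sketch: guint16 is replaced by uint16_t from stdint.h, and the test string is made up for illustration.</p>
<pre name="code" class="brush: c">
#include &lt;stdint.h&gt;
#include &lt;stdio.h&gt;

typedef uint16_t guint16; /* stand-in for glib's guint16 */

/* Same logic as unmanagedCount above: sum the UTF-16 code units of the
   string at arr[index], stopping at the NUL terminator. */
int unmanagedCount(guint16 **arr, int index)
{
    int sum = 0;
    guint16 *str = arr[index];
    while (*str) {
        sum += *str;
        str++;
    }
    return sum;
}

int main(void)
{
    /* "abc" as NUL-terminated UTF-16 code units (hypothetical test data) */
    guint16 abc[] = { 'a', 'b', 'c', 0 };
    guint16 *arr[] = { abc };
    printf("%d\n", unmanagedCount(arr, 0)); /* 97 + 98 + 99 = 294 */
    return 0;
}
</pre>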
<p>A string in the CLR has a two-byte-per-character representation, so we use a guint16** pointer to access the array of strings. We also have to add a declaration to the *.cs file.</p>
<pre name="code" class="brush: csharp">
[DllImport ("libperf.so",EntryPoint="unmanagedCount")]
public static extern int UnmanagedCount(
[MarshalAs(UnmanagedType.LPArray, ArraySubType=UnmanagedType.LPWStr)]
string[] arr,
int i
);
</pre>
<p>The <b>[DllImport]</b> attribute tells which native library to use and the name of the function in the library (<b>EntryPoint</b>); the <b>[MarshalAs]</b> attribute says that P/Invoke must pass the first parameter as an array of two-byte strings.</p>
<p>If you're unfamiliar with P/Invoke, you should know one thing: on every call, P/Invoke converts the parameters from managed types to unmanaged ones, and converts the return value back. The [MarshalAs] attribute on parameters tells the CLR how they should be converted. Such conversions consume additional time and affect performance as well.</p>
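<p>To see why this per-call conversion is expensive, here is a simplified model of what string-array marshaling costs per call: allocating a native buffer and copying each string into it. This is an illustration under assumed semantics, not mono's actual marshaler code, and all the function names here are hypothetical.</p>
<pre name="code" class="brush: c">
#include &lt;stdint.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;string.h&gt;

/* Duplicate one NUL-terminated UTF-16 string into a fresh native buffer. */
uint16_t *copy_utf16(const uint16_t *src)
{
    size_t n = 0;
    while (src[n]) n++;                      /* find the NUL terminator */
    uint16_t *dst = malloc((n + 1) * sizeof *dst);
    memcpy(dst, src, (n + 1) * sizeof *dst); /* copy chars + terminator */
    return dst;
}

/* One call's worth of marshaling: allocate and copy every string. */
uint16_t **marshal_string_array(uint16_t *const *arr, int count)
{
    uint16_t **native = malloc((size_t)count * sizeof *native);
    for (int i = 0; i < count; i++)
        native[i] = copy_utf16(arr[i]);
    return native;
}

/* ...and the matching cleanup after the native function returns. */
void free_string_array(uint16_t **native, int count)
{
    for (int i = 0; i < count; i++)
        free(native[i]);
    free(native);
}

int main(void)
{
    uint16_t s[] = { 'h', 'i', 0 };
    uint16_t *arr[] = { s };
    /* in this model, an allocate/copy/free cycle happens on every call */
    uint16_t **native = marshal_string_array(arr, 1);
    free_string_array(native, 1);
    return 0;
}
</pre>
<p>Multiplied by ten million calls, this kind of per-call copying dominates the measurements below.</p>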
<p>Now we can create an array of strings and call these functions ten million times to measure the execution time.</p>
<p><b>
Managed: 3 939 ms<br />
PInvoke: 11 616 ms
</b></p>
<p>You can see that P/Invoke is three times slower than the managed function, mostly because of these managed-to-unmanaged conversions; so you can't improve performance with P/Invoke to an unmanaged function if it is called very often.</p>
<h1>Internal call</h1>
<p>Mono has a hidden feature which is not yet well known: Internal Calls. The primary purpose of Internal Calls is to provide a way to implement some critical methods of the corlib library in native code (memory allocation, copying of objects, interaction with sockets and so on). Secondarily, they allow a native application which embeds mono to call native functions of the application. With some magic I found a way to use Internal Calls in an ordinary mono application without embedding mono or changing the corlib assembly.</p>
<!-- <p>Description of how Internal Calls work</p> -->
<p>First, declare the InternalCount method in the *.cs file:</p>
<pre name="code" class="brush: csharp">
[DllImport ("libperf.so",EntryPoint="internalCount")]
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
public static extern int InternalCount(string[] arr, int i);
</pre>
<p>The difference between a platform invoke method declaration and an internal call is that you place the <b>[MethodImpl(MethodImplOptions.InternalCall)]</b> attribute over the method. The MethodCodeType field is optional and may be omitted. There is another difference: you don't need to specify how the parameters will be marshaled, because Internal Calls don't convert the method parameters to unmanaged types; the parameters are placed on the stack as-is.</p>
<p>Then we have to write a registration function for our internal call. Add the declaration to the .cs file:</p>
<pre name="code" class="brush: csharp">
[DllImport ("libperf.so",EntryPoint="init")]
public static extern void InitInternals();
</pre>
<p>And add the code to the .c file:</p>
<pre name="code" class="brush: c">
#include <mono/metadata/loader.h>
void
init()
{
mono_add_internal_call (
"PInvokePerf.PerformanceTest::InternalCount(string[],int)",
internalCount
);
}
</pre>
<p>You can see that the init() function calls <b>mono_add_internal_call</b>. This function is defined in the mono runtime, so you have to include the <b>&lt;mono/metadata/loader.h&gt;</b> header, add the include search path to the compiler options, and link with the mono library. To get the include path, run
<br /><br /><b>pkg-config --cflags mono-2</b>
<br /><br />To find the library name and path, run
<br /><br /><b>pkg-config --libs mono-2</b>
<br /><br /><a href="https://github.com/xplicit/PInvokePerf/blob/master/libperf/Makefile">An example Makefile</a>
<br /><br />
</p>
<p>The function <b>mono_add_internal_call</b> has two parameters: the CLI method name (with an optional signature) and a pointer to the native function which will be called when the CLR calls the declared method. The method name is constructed as "Namespace.ClassName::MethodName" and may optionally include the method signature (this is useful when you have overloaded methods).</p>
<p>Now we are ready for the final part: the implementation of the <b>internalCount</b> function. Let's look at the function body.</p>
<pre name="code" class="brush: c">
int
internalCount (MonoArray *arr,int index)
{
MonoString* el = mono_array_get(arr,MonoString *,index);
int len = mono_string_length(el);
gint32 sum=0;
guint16 *str = mono_string_chars(el);
int i;
for(i = 0; i < len; i++)
{
sum += str[i];
}
return sum;
}
</pre>
<p>You may notice that the function has the <b>MonoArray</b> type in its signature, which represents the <b>string[]</b> type in C#. That is the most important difference from standard P/Invoke: Internal Calls work directly with managed types, and you have to use the Mono API to access parameters and return values. The Mono API header files can be found in the <b>pkg-config --cflags mono-2</b> directory mentioned above.</p>
<p>Some comments about the code:
<br /><br /><b>MonoString* el = mono_array_get(arr,MonoString *,index);</b>
<br />returns the element of type <i>MonoString *</i> at position <i>index</i> of the array <i>arr</i>
<br />
<br /><b>mono_string_length(el)</b>
<br />returns the string length
<br />
<br /><b>mono_string_chars(el)</b>
<br />returns a pointer to the internal char array of the managed string
</p>
<p>Now everything is in place and we can run our InternalCount function. When I did it for the first time, I wrote it like this:</p>
<pre name="code" class="brush: csharp">
public static void Main (string[] args)
{
//We must call InitInternals to initialize internal calls
PerformanceTest.InitInternals();
PerformanceTest.InternalCount(arr,0);
}
</pre>
<p>But to my surprise it worked as expected only in mono AOT mode; when I ran the program in normal mode I got a <b>MissingMethodException</b>. I had to spend some time with a debugger and found something interesting.</p>
<p>When mono starts to execute the 'Main' method, the JIT compiler compiles 'Main' first and, recursively, all the methods called from 'Main'. Since 'Main' references the 'InternalCount' method, the JIT starts to compile 'InternalCount' too. During compilation it searches for the method name among the registered internal calls, because the method has the [MethodImplOptions.InternalCall] attribute. But it cannot find it, because the 'InitInternals' function has not run yet! In this case the JIT generates 'throw new MissingMethodException' in IL, and all subsequent calls throw that exception, even after we register the proper internal call later.</p>
<p>To avoid this behaviour I hid the InternalCount method from the JIT: I moved all the meaningful code out of the 'Main' function, and in 'Main' created a delegate to my function and called it. The JIT compiles the delegate target only when it starts to run, and the MissingMethodException goes away! The code now looks like this:</p>
<pre name="code" class="brush: csharp">
delegate void HideFromJit();
public static void Main (string[] args)
{
//Create array
InitArray ();
//Register internal calls
PerformanceTest.InitInternals ();
Console.WriteLine ("Performance measuring starting");
//You can call it directly in AOT mode
//Performance ();
HideFromJit d=Performance;
d ();
}
public static void Performance()
{
//All the code is here
for(int i = 0; i < 1000000; i++)
PerformanceTest.InternalCount(arr,i%100);
}
</pre>
<p>Finally, the performance comparison for these methods:</p>
<div style="text-align: center">
<table border="1" cellspacing="0" cellpadding="4">
<tr><th>Method<th>Mono no optimizations</th><th>Mono --optimize=unsafe</th></tr>
<tr><td>Managed</td><td>3 939 ms</td><td>2 784 ms</td></tr>
<tr><td>PInvoke</td><td>11 616 ms</td><td>11 804 ms</td></tr>
<tr><td>Internal Call</td><td>872 ms</td><td>855 ms</td></tr>
</table>
</div>
<p>Internal Calls are the clear winner, while P/Invoke is the outsider, with no chance of beating even managed code. P/Invoke and Internal Call performance differ by more than ten times!</p>
<p>And here are the results for the byte-buffer xoring algorithm:</p>
<div style="text-align: center">
<table border="1" cellspacing="0" cellpadding="4">
<tr><th>Method<th>Mono no optimizations</th><th>Mono --optimize=unsafe</th></tr>
<tr><td>Managed</td><td>2 068 ms</td><td>1 507 ms</td></tr>
<tr><td>PInvoke</td><td>4 387 ms</td><td>3 707 ms</td></tr>
<tr><td>Internal Call</td><td>1 372 ms</td><td>1 381 ms</td></tr>
</table>
</div>
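<p>The post does not show the xoring benchmark's body, so as a sketch, a kernel like the following is what was measured per iteration; the buffer size and key here are assumptions for illustration, not the benchmark's actual parameters.</p>
<pre name="code" class="brush: c">
#include &lt;stddef.h&gt;
#include &lt;stdint.h&gt;
#include &lt;stdio.h&gt;

/* xor every byte of the buffer with a one-byte key */
void xorBuffer(uint8_t *buf, size_t len, uint8_t key)
{
    for (size_t i = 0; i &lt; len; i++)
        buf[i] ^= key;
}

int main(void)
{
    uint8_t buf[4] = { 0x00, 0xFF, 0x0F, 0xF0 };
    xorBuffer(buf, sizeof buf, 0xAA);
    printf("%02X\n", buf[0]);      /* 0x00 ^ 0xAA = AA */
    xorBuffer(buf, sizeof buf, 0xAA);
    printf("%02X\n", buf[1]);      /* xoring twice restores: FF */
    return 0;
}
</pre>
<p>In managed code the bounds check inside this loop is exactly what the 'unsafe' optimization removes, which is why it was chosen for the comparison.</p>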
<p>You can see that with the 'unsafe' optimization, managed code runs close to Internal Calls, but without optimizations it is about 50% slower. I chose the 'unsafe' optimization because it gives the maximal speed boost for code working with arrays (it removes bounds checks). P/Invoke is again in last place.</p>
<p>Open questions:
<ul>
<li>GC movement: should we pin managed data, or do something else, to make sure the data is not moved by the GC while we are inside the internal call?</li>
</ul>
</p>
<h1>Conclusion</h1>
<p>Mono is a powerful framework and allows you to do great things with native code as well as managed. If you want to increase the performance of your managed code, don't use P/Invoke to unmanaged functions, as it defeats performance; instead, look into Internal Calls. But be aware that the internal call mechanism is platform-dependent, and you will not be able to run your great app on .NET if you use it. Then again, you can always add a conditional #ifdef and compile your app with the managed method for .NET and the internal call for Mono.</p>
<h1>References</h1>
<p>
<ul>
<li><a href="https://github.com/xplicit/PInvokePerf">Source codes</a></li>
<li><a href="http://www.mono-project.com/Embedding_Mono">Embedding mono</a></li>
<li><a href="http://www.mono-project.com/Interop_with_Native_Libraries">Mono: interop with native libraries</a></li>
</ul>
</p>
<h1>Unexpected unloading of mono web application</h1>
<p><i>2013-12-20</i></p>
<p>After several bugs in the mono GC were fixed, I was able to run benchmarks for an aspx page on an apache2 + mod-mono server. I used mono from the master branch; mono --version says: "Mono Runtime Engine version 3.2.7 (master/01b7a50 Sat Dec 14 01:48:49 NOVT 2013)". The SIGSEGV crashes went away, but unfortunately I can't say that serving aspx with apache2 is stable now. Twice during benchmarks I got something similar to a deadlock: mono stopped processing requests and got stuck consuming 100% CPU. I don't know what it was; my attempt to debug the mono process with GDB did not bring an answer (unlike other cases, where GDB helped me find the cause of deadlocks/SIGSEGVs, or at least the suspicious code, and send that info to the mono team). There are also memory leaks. And there is a really bad thing: the server stops responding after processing ~160 000 requests, though there is a workaround for it.</p>
<h1>Mono .aspx 160K requests limit</h1>
<p>If you run <b>ab -n 200000 http://yoursite/hello.aspx</b>, where hello.aspx is a simple aspx page which does nothing, and the site is served under apache mod-mono, then after ~160K requests you'll get a denial of service. This error has several causes; I'll try to explain what is going on and how to avoid it.</p>
<p>When a request comes to an aspx page, the web server creates a new session and saves it to the internal web cache. When the second request comes, the server tries to read the session cookie and, if it is not found, creates and saves a new session to the cache again. So every request without cookies creates a new session object in the cache. This could lead to huge memory leaks as the number of sessions grows without bound; to prevent this, the web server has a maximum limit on the number of objects the internal web cache can store. This limit is defined as a constant in Cache.cs and hardcoded to 15000.</p>
<p>When the number of objects in the internal cache hits 15000, the web server starts to aggressively delete objects from the cache using an LRU strategy. So a user who got a session 5 minutes ago and works with the site by clicking a page every minute will have his session removed from the cache (losing all the data inside the session), while some hazardous script (with no session cookie set) which sent 15K requests to the page during the last minute gets to create 15K empty sessions. But this is not all.</p>
<p>The internal cache is also used for storing some important server objects; for example, all dynamically compiled assemblies are stored there. And there is no preference for server objects: when deleting from the cache, all objects are equal. So if some server object has not been accessed for too long, it will be removed. And this is the cause of the second error.</p>
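<p>To illustrate the eviction behavior described above, here is a minimal, hypothetical LRU cache sketch (this is not Mono's actual Cache.cs; the 15000 limit is scaled down to 3). When the LRU bookkeeping works correctly, a recently touched entry such as a compiled page survives eviction while a stale session does not:</p>

```csharp
using System;
using System.Collections.Generic;

// Minimal illustrative LRU cache, not Mono's Cache.cs (capacity scaled down from 15000 to 3).
class LruCache<K, V>
{
    readonly int capacity;
    readonly Dictionary<K, LinkedListNode<KeyValuePair<K, V>>> map =
        new Dictionary<K, LinkedListNode<KeyValuePair<K, V>>>();
    readonly LinkedList<KeyValuePair<K, V>> order = new LinkedList<KeyValuePair<K, V>>();

    public LruCache(int capacity) { this.capacity = capacity; }

    public void Add(K key, V value)
    {
        if (map.Count >= capacity)
        {
            // Evict the least recently used entry, whatever it is: a compiled
            // page assembly is treated exactly like an idle user session.
            var lru = order.Last;
            order.RemoveLast();
            map.Remove(lru.Value.Key);
        }
        map[key] = order.AddFirst(new KeyValuePair<K, V>(key, value));
    }

    public bool TryGet(K key, out V value)
    {
        LinkedListNode<KeyValuePair<K, V>> node;
        if (map.TryGetValue(key, out node))
        {
            // Refresh the last-access position. If this step is ever skipped
            // (the bug suspected above), even a just-used item can be evicted.
            order.Remove(node);
            order.AddFirst(node);
            value = node.Value.Value;
            return true;
        }
        value = default(V);
        return false;
    }
}

class LruDemo
{
    static void Main()
    {
        var cache = new LruCache<string, string>(3);
        cache.Add("compiled-page", "assembly");
        cache.Add("session1", "data1");
        cache.Add("session2", "data2");
        string v;
        cache.TryGet("compiled-page", out v); // touch the compiled page
        cache.Add("session3", "data3");       // evicts session1, not the page
        Console.WriteLine(cache.TryGet("compiled-page", out v)); // True
        Console.WriteLine(cache.TryGet("session1", out v));      // False
    }
}
```

<p>The bug report above is precisely that the refresh step in TryGet does not seem to take effect in Mono's cache, so the compiled page gets evicted despite being touched on every request.</p>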
<p>Here is the code of the GetCompiledAssembly() method. It's called every time the page is accessed:</p>
<pre name="code" class="brush:csharp">
string vpabsolute = virtualPath.Absolute;
if (is_precompiled) {
    Type type = GetPrecompiledType (vpabsolute);
    if (type != null)
        return type.Assembly;
}
BuildManagerCacheItem bmci = GetCachedItem (vpabsolute);
if (bmci != null)
    return bmci.BuiltAssembly;
Build (virtualPath);
bmci = GetCachedItem (vpabsolute);
if (bmci != null)
    return bmci.BuiltAssembly;
return null;
</pre>
<p>Let's look. When an .aspx page is accessed for the first time, the method checks whether it was precompiled; if so, it returns the precompiled type's assembly. If not, it tries to find the compiled page in the internal cache, and if it is not found there, it compiles the page and stores the compiled type in the cache (inside the Build() function). The scheme looks good, but not in our case: when the internal cache outgrows the 15K limit, the compiled type is removed from the cache even if it was accessed just now! I think there is some bug in the LRU implementation, or maybe objects are fetched from the LRU only once and saved into some temp variable, so the LRU object's last access time is never updated.</p>
<p>You may ask: "So what? The compiled type was deleted from the cache, but won't it be there on the next page get? The algorithm checks for the type in the cache, and if it is not found, compiles it again and places it in the cache. That could reduce performance, but it could not be a reason for denial of service". And you'd be right: this is not exactly the reason for the DoS. But if you look inside the page compilation, you'll find that it has a limit on the number of recompilations. When this limit is reached, it unloads the AppDomain along with the whole application! And on top of that, mod-mono somehow does not handle the AppDomain unloading (I don't know why, but it should), so after ~160K requests the page stops responding.</p>
<pre name="code" class="brush:csharp">
try {
    BuildInner (vp, cs != null ? cs.Debug : false);
    if (entryExists && recursionDepth <= 1)
        // We count only update builds - first time a file
        // (or a batch) is built doesn't count.
        buildCount++;
} finally {
    // See http://support.microsoft.com/kb/319947
    if (buildCount > cs.NumRecompilesBeforeAppRestart)
        HttpRuntime.UnloadAppDomain ();
    recursionDepth--;
}
</pre>
<p>How can this be worked around?
<br />I know only one way - always use a precompiled web site. At first I had a hope that the LOW_WATERMARK and HIGH_WATERMARK constants for the cache could be changed by setting an appropriate environment variable, but unfortunately they can't. In my opinion the cache usage should be rewritten: user sessions and web server internal objects should have separate storage and must not affect each other. Also, a session should not be created at the first page access; if the page doesn't ask for the session object, it can be created later, when it is really needed for processing the page.</p>
Anonymoushttp://www.blogger.com/profile/15995362150377842185noreply@blogger.com0tag:blogger.com,1999:blog-5803724860059155983.post-62927608646642465262013-12-11T06:52:00.001-08:002014-01-20T08:37:52.271-08:00ServiceStack performance on mono part4<div dir="ltr" style="text-align: left;" trbidi="on">
<br /></div>
<p>Today I again tried to increase the performance of ServiceStack on Mono. In the first part I noted that the profiler showed a large number of calls to, and a long execution time of, the <b>Hashtable:GetHash()</b>, <b>SimpleCollator:CompareInternal()</b> and <b>Char:ToLower()</b> methods. To understand why these methods work slowly, I checked the call stack and found that most of the calls are made from the HttpHeadersCollection class. I looked inside the source and saw that HttpHeadersCollection uses InvariantCultureIgnoreCase string comparison instead of OrdinalIgnoreCase, which is more suitable for comparing header names (because they do not need to be linguistically equivalent) and should be more performant.</p>
<p>To be sure about Hashtable and Dictionary performance with the various StringComparer options, I wrote a simple benchmark. It adds 100 000 strings and then tries to get them one by one, for every StringComparer option. I took the original idea for the test code from <a href="http://www.stephan-brenner.com/?p=69">here</a>; my test is slightly modified.</p>
<pre name="code" class="brush: csharp">
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Collections;

namespace DictPerfomanceTest
{
    class ComparerInfo
    {
        public string Name { get; set;}
        public StringComparer Comparer { get; set;}

        public ComparerInfo(string name, StringComparer comparer)
        {
            Name = name;
            Comparer = comparer;
        }
    }

    class MainClass
    {
        const int nCount = 100000;
        const string prefix = "SomeSomeString";

        static readonly ComparerInfo[] Comparers = new ComparerInfo[]
        {
            new ComparerInfo("CurrentCulture", StringComparer.CurrentCulture),
            new ComparerInfo("CurrentCultureIgnoreCase", StringComparer.CurrentCultureIgnoreCase),
            new ComparerInfo("InvariantCulture", StringComparer.InvariantCulture),
            new ComparerInfo("InvariantCultureIgnoreCase", StringComparer.InvariantCultureIgnoreCase),
            new ComparerInfo("Ordinal", StringComparer.Ordinal),
            new ComparerInfo("OrdinalIgnoreCase", StringComparer.OrdinalIgnoreCase)
        };

        public static void Main (string[] args)
        {
            foreach(var ci in Comparers)
            {
                Console.WriteLine ("Hashtable: {0}", ci.Name);
                Run (new Hashtable (ci.Comparer));
            }
            foreach(var ci in Comparers)
            {
                Console.WriteLine ("Dictionary: {0}", ci.Name);
                Run (new Dictionary<string,string> (ci.Comparer));
            }
        }

        private static void Run(Hashtable hashtable)
        {
            for(int i = 0; i < nCount; i++)
            {
                hashtable.Add(prefix+i.ToString(), i.ToString());
            }
            Stopwatch sw = new Stopwatch();
            sw.Start();
            for (int i = 0; i < nCount; i++)
            {
                string a = (string)hashtable[prefix+i.ToString()];
            }
            sw.Stop();
            Console.WriteLine("Time: {0} ms", sw.ElapsedMilliseconds);
        }

        private static void Run(Dictionary<string, string> dictionary)
        {
            for(int i = 0; i < nCount; i++)
            {
                dictionary.Add(prefix+i.ToString(), i.ToString());
            }
            Stopwatch sw = new Stopwatch();
            sw.Start();
            for (int i = 0; i < nCount; i++)
            {
                string a = dictionary[prefix+i.ToString()];
            }
            sw.Stop();
            Console.WriteLine("Time: {0} ms", sw.ElapsedMilliseconds);
        }
    }
}
</pre>
<div style="text-align: center">
<table border="1" cellspacing="0" cellpadding="4">
<tr><th>Comparison Option</th><th>Hashtable time (ms)</th><th>Dictionary time (ms)</th></tr>
<tr><td>CurrentCulture</td><td>19 131</td><td>16 030</td></tr>
<tr><td>CurrentCultureIgnoreCase</td><td>20 458</td><td>16 587</td></tr>
<tr><td>InvariantCulture</td><td>18 359</td><td>15 161</td></tr>
<tr><td>InvariantCultureIgnoreCase</td><td>21 128</td><td>16 192</td></tr>
<tr><td>Ordinal</td><td>58</td><td>46</td></tr>
<tr><td>OrdinalIgnoreCase</td><td>73</td><td>73</td></tr>
</table>
</div>
<p>What can I say? Don't use InvariantCulture or culture-dependent comparison in mono if you don't really need it! In most cases, when you use a string as a dictionary key, you can safely use the Ordinal or OrdinalIgnoreCase string comparison options. For example, names of caching keys in Redis, paths, and names of configuration elements in XML are good candidates for Ordinal comparison. By default Dictionary uses Ordinal and Hashtable uses OrdinalIgnoreCase comparison for strings, but don't forget to pass these options to the String.Compare(), String.StartsWith() and String.EndsWith() methods if you want your software to run fast and be more predictable.</p>
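<p>A short sketch of how to make the ordinal choice explicit, following the advice above (this uses only standard .NET APIs, nothing mono-specific):</p>

```csharp
using System;
using System.Collections.Generic;

class OrdinalDemo
{
    static void Main()
    {
        // Keys such as HTTP header names need no linguistic comparison,
        // so pass an ordinal comparer explicitly instead of relying on defaults.
        var headers = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
        headers["Content-Type"] = "application/json";
        Console.WriteLine(headers["content-type"]); // application/json

        // The same options exist on the static and instance string methods.
        string path = "/servicestack/json";
        Console.WriteLine(path.StartsWith("/ServiceStack", StringComparison.OrdinalIgnoreCase)); // True
        Console.WriteLine(string.Equals("hello.ASPX", "hello.aspx", StringComparison.OrdinalIgnoreCase)); // True
        Console.WriteLine(string.Compare("a", "B", StringComparison.Ordinal)); // positive: byte-wise, 'a' > 'B'
    }
}
```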
<p>A very good explanation of the difference between InvariantCulture and Ordinal comparison can be found <a href="http://social.msdn.microsoft.com/Forums/vstudio/en-US/9687aa28-bcc7-4221-8099-495f6b907f29/difference-between-invariantculture-and-ordinal-string-comparision">here</a>. In two lines of code it looks like this:</p>
<pre name="code" class="brush:csharp">
Console.WriteLine(String.Equals("æ", "ae", StringComparison.Ordinal)); // Prints false
Console.WriteLine(String.Equals("æ", "ae", StringComparison.InvariantCulture)); // Prints true
</pre>
<p>I changed HttpHeadersCollection in this <a href="https://github.com/xplicit/mono/commit/583e8e07f660280b76456365ebfc04346dcd35e6">commit</a> and made a pull request to mono; I hope it will be reviewed and approved. I am also going to change the hashing functions for HttpRequest headers: first tests show a 3x to 6x performance improvement for an ordinal case-insensitive hash function, without any changes to the hashing algorithm.</p>
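<p>The hash-function idea can be sketched like this: an FNV-1a-style hash that folds ASCII letters to lower case while hashing, so two header names that differ only in case get the same hash code without a separate ToLower() pass or allocation. This is an illustration of the idea, not the actual code from the pull request:</p>

```csharp
using System;

static class OrdinalIgnoreCaseHash
{
    // FNV-1a with inline ASCII case folding; a hypothetical sketch, not Mono's code.
    public static int Compute(string s)
    {
        unchecked
        {
            uint hash = 2166136261;
            for (int i = 0; i < s.Length; i++)
            {
                char c = s[i];
                if (c >= 'A' && c <= 'Z')
                    c = (char)(c + 32); // fold to lower case without allocating a new string
                hash = (hash ^ c) * 16777619;
            }
            return (int)hash;
        }
    }
}

class HashDemo
{
    static void Main()
    {
        // Header-name lookups become case-insensitive for free:
        Console.WriteLine(OrdinalIgnoreCaseHash.Compute("Content-Type")
            == OrdinalIgnoreCaseHash.Compute("content-type")); // True
    }
}
```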
<p>Links:</p>
<br/><a href="http://forcedtoadmin.blogspot.com/2013/11/servicestack-performance-in-mono-p1.html">ServiceStack performance in mono. Part 1</a>
<br/><a href="http://forcedtoadmin.blogspot.com/2013/11/servicestack-performance-in-mono-p2.html">ServiceStack performance in mono. Part 2</a>
<br/><a href="http://forcedtoadmin.blogspot.com/2013/12/servicestack-performance-in-mono-p3.html">ServiceStack performance in mono. Part 3</a>
Anonymoushttp://www.blogger.com/profile/15995362150377842185noreply@blogger.com0tag:blogger.com,1999:blog-5803724860059155983.post-55590160164876107652013-12-05T04:07:00.000-08:002014-04-05T04:16:07.789-07:00ServiceStack performance in mono part 3<div dir="ltr" style="text-align: left;" trbidi="on">
<br /></div>
<p>In the previous post I benchmarked various HTTP mono backends on Linux and found that the Nginx+mono-server-fastcgi pair is very slow in comparison with the others; there was a several-fold difference in the number of served requests per second! So two questions were raised: the first is "Why is it so slow?" and the second "What can be done to improve performance?". In this post I'll try to answer both.</p>
<h1>Why is it so slow?</h1>
<p>Let's profile the fastcgi mono server. You should remember that profiling can be enabled by setting the appropriate MONO_OPTIONS environment variable; if you don't, you can read about web server profiling options in <a href="http://forcedtoadmin.blogspot.ru/2013/11/servicestack-performance-in-mono-p1.html">the first part</a>.</p>
<p>After running the profiler I got these results:</p>
<pre style="margin-left:-25px; margin-right: -25px;overflow-x: scroll">
Total(ms) Self(ms) Calls Method name
243637 4 1002 (wrapper remoting-invoke-with-check) Mono.WebServer.FastCgi.ApplicationHost:ProcessRequest (Mono.WebServer.FastCgi.Responder)
140963 4 591 (wrapper runtime-invoke) <Module>:runtime_invoke_void__this___object (object,intptr,intptr,intptr)
140863 60 501 Mono.FastCgi.Server:OnAccept (System.IAsyncResult)
140570 25 501 Mono.FastCgi.Connection:Run ()
129977 3 501 Mono.FastCgi.Request:AddInputData (Mono.FastCgi.Record)
129971 5 501 Mono.FastCgi.ResponderRequest:OnInputDataReceived (Mono.FastCgi.Request,Mono.FastCgi.DataReceivedArgs)
129964 0 501 Mono.FastCgi.ResponderRequest:Worker (object)
129963 1 501 Mono.WebServer.FastCgi.Responder:Process ()
129959 34 501 (wrapper xdomain-invoke) Mono.WebServer.FastCgi.ApplicationHost:ProcessRequest (Mono.WebServer.FastCgi.Responder)
122777 3 501 (wrapper xdomain-dispatch) Mono.WebServer.FastCgi.ApplicationHost:ProcessRequest (object,byte[]&,byte[]&)
113673 3 501 Mono.WebServer.FastCgi.ApplicationHost:ProcessRequest (Mono.WebServer.FastCgi.Responder)
112227 14 501 Mono.WebServer.BaseApplicationHost:ProcessRequest (Mono.WebServer.MonoWorkerRequest)
112205 2 501 Mono.WebServer.MonoWorkerRequest:ProcessRequest ()
111942 2 501 System.Web.HttpRuntime:ProcessRequest (System.Web.HttpWorkerRequest)
111761 3 501 System.Web.HttpRuntime:RealProcessRequest (object)
111745 11 501 System.Web.HttpRuntime:Process (System.Web.HttpWorkerRequest)
110814 7 501 System.Web.HttpApplication:System.Web.IHttpHandler.ProcessRequest (System.Web.HttpContext)
110785 7 501 System.Web.HttpApplication:Start (object)
110148 14 501 System.Web.HttpApplication:Tick ()
110133 346 501 System.Web.HttpApplication/<Pipeline>c__Iterator1:MoveNext ()
73347 92 6012 System.Web.HttpApplication/<RunHooks>c__Iterator0:MoveNext ()
64025 32 501 System.Web.Security.FormsAuthenticationModule:OnAuthenticateRequest (object,System.EventArgs)
62704 141 21042 Mono.WebServer.FastCgi.WorkerRequest:GetKnownRequestHeader (int)
62550 250 45647 System.Runtime.Serialization.Formatters.Binary.ObjectReader:ReadObject (System.Runtime.Serialization.Formatters.Binary.BinaryElement,System.IO.BinaryReader,long&,object&,System.Runtime.Serialization.SerializationInfo&)
62273 5 1002 System.Web.HttpRequest:get_Cookies ()
62203 134 20040 Mono.WebServer.FastCgi.WorkerRequest:GetUnknownRequestHeaders ()
56381 6 1002 (wrapper remoting-invoke-with-check) Mono.WebServer.FastCgi.Responder:GetParameters ()
56373 34 501 (wrapper xdomain-invoke) Mono.WebServer.FastCgi.Responder:GetParameters ()
54634 368 44653 System.Runtime.Serialization.Formatters.Binary.ObjectWriter:WriteObjectInstance (System.IO.BinaryWriter,object,bool)
51554 16 1514 System.Runtime.Serialization.Formatters.Binary.BinaryFormatter:Deserialize (System.IO.Stream)
51537 47 1514 System.Runtime.Serialization.Formatters.Binary.BinaryFormatter:NoCheckDeserialize (System.IO.Stream,System.Runtime.Remoting.Messaging.HeaderHandler)
51531 34 12007 System.Runtime.Remoting.RemotingServices:DeserializeCallData (byte[])
50521 19 1514 System.Runtime.Serialization.Formatters.Binary.ObjectReader:ReadObjectGraph (System.Runtime.Serialization.Formatters.Binary.BinaryElement,System.IO.BinaryReader,bool,object&,System.Runtime.Remoting.Messaging.Header[]&)
48246 46 7536 System.Runtime.Serialization.Formatters.Binary.ObjectReader:ReadNextObject (System.IO.BinaryReader)
47020 999 54096 System.Runtime.Serialization.Formatters.Binary.ObjectReader:ReadValue (System.IO.BinaryReader,object,long,System.Runtime.Serialization.SerializationInfo,System.Type,string,System.Reflection.MemberInfo,int[])
35051 143 22013 System.Runtime.Remoting.RemotingServices:SerializeCallData (object)
34198 7 1516 System.Runtime.Serialization.Formatters.Binary.BinaryFormatter:Serialize (System.IO.Stream,object)
34190 15 1516 System.Runtime.Serialization.Formatters.Binary.BinaryFormatter:Serialize (System.IO.Stream,object,System.Runtime.Remoting.Messaging.Header[])
33354 28 1516 System.Runtime.Serialization.Formatters.Binary.ObjectWriter:WriteObjectGraph (System.IO.BinaryWriter,object,System.Runtime.Remoting.Messaging.Header[])
33253 78 1516 System.Runtime.Serialization.Formatters.Binary.ObjectWriter:WriteQueuedObjects (System.IO.BinaryWriter)
29792 539 16549 System.Runtime.Serialization.Formatters.Binary.ObjectWriter:WriteObject (System.IO.BinaryWriter,long,object)
28486 656 49652 System.Runtime.Serialization.Formatters.Binary.ObjectWriter:WriteValue (System.IO.BinaryWriter,System.Type,object)
26041 101 501 System.Runtime.Serialization.Formatters.Binary.ObjectReader:ReadGenericArray (System.IO.BinaryReader,long&,object&)
24552 16 501 System.Web.HttpApplication:PipelineDone ()
23851 58 501 System.Web.HttpApplication:OutputPage ()
23782 20 501 System.Web.HttpResponse:Flush (bool)
23079 598 16539 System.Runtime.Serialization.Formatters.Binary.ObjectReader:ReadObjectContent (System.IO.BinaryReader,System.Runtime.Serialization.Formatters.Binary.ObjectReader/TypeMetadata,long,object&,System.Runtime.Serialization.SerializationInfo&)
22542 24 501 (wrapper xdomain-dispatch) Mono.WebServer.FastCgi.Responder:GetParameters (object,byte[]&,byte[]&)
19536 39 3030 System.Runtime.Serialization.Formatters.Binary.ObjectWriter:WriteArray (System.IO.BinaryWriter,long,System.Array)
18377 105 501 System.Runtime.Serialization.Formatters.Binary.ObjectWriter:WriteGenericArray (System.IO.BinaryWriter,long,System.Array)
</pre>
<p>In the profile you can see that there are a lot of binary serialization calls, which take most of the processing time. But if you look into the mono fastcgi code, you won't find any explicit calls to BinaryFormatter. What is going on? I hope you've already guessed what causes such serialization overhead; if not, let's look at the picture:</p>
<div class="separator" style="clear: both; text-align: left;margin-left: -10px"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgp4cH0UbGgPh52dHetJoOcR5_eiumegubboFSqf3gdajQhM2Kv9G-5ue7nyw9SeeVOTHzWV4-STigSkkTi0lDlHXX8P7e3v5k6rncTXrAJx2bxY-OCrJ8GYetP00SBPtmmiqm3nIpQeGQP/s1600/fastcgi-arch.png" imageanchor="1" style="margin-left: -1em; margin-right: -1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgp4cH0UbGgPh52dHetJoOcR5_eiumegubboFSqf3gdajQhM2Kv9G-5ue7nyw9SeeVOTHzWV4-STigSkkTi0lDlHXX8P7e3v5k6rncTXrAJx2bxY-OCrJ8GYetP00SBPtmmiqm3nIpQeGQP/s1600/fastcgi-arch.png" /></a></div>
<p>A new FastCGI request handler is created for every request from Nginx; then the request looks for the corresponding web application by the HTTP_HOST server variable, and once the application is found, it creates a new HttpWorkerRequest inside of it and calls the Process method to process it. While processing, the web application communicates with the FastCGI request handler (asks for HTTP headers, returns the HTTP response and so on). Because the FastCGI request handler and the web application are located in different AppDomains, all calls between them go through remoting. Remoting invokes binary serialization for the objects being passed, and this makes the application slow. I'd rather say remoting makes an application VERY VERY VERY SLOW if you pass complex types between endpoints. It's a prime evil for distributed applications that need to be performant; don't use remoting if you have another way for your apps to communicate.</p>
<p>OK, we found that the fastcgi server actively uses remoting internally, and this can reduce performance. But is remoting the only thing that dramatically reduces performance? Maybe the FastCGI protocol itself is very slow, and we couldn't build a fast and reliable mono web server behind nginx at all?</p>
<p>To check this I decided to write a simple application based on the mono-server-fastcgi source code. The application should instantly return a "Hello, world!" HTTP response for every HTTP request, without using remoting. If I could write such an app and it were more performant, I would have proved that a faster and more reliable web server could be created.</p>
<h1>Proof of concept</h1>
<p>I took the FastCGI server sources and wrote my own network server based on async sockets. From the old sources I kept only the FastCGI record parser; everything else I got rid of. After the simple app was completed, I ran the benchmarks.</p>
<p>Before publishing the results, let's recall the benchmarks of mono-server-fastcgi made in the previous post.</p>
<div style="text-align: center">
<table border="1" cellspacing="0" cellpadding="4">
<tr><th>Configuration</th><th>requests/sec</th><th>Standard deviation</th><th>std dev %</th><th>Comments</th></tr>
<tr><td>Nginx+fastcgi-server+ServiceStack</td><td><b>571.36</b></td><td>8.81</td><td>1.54</td><td>Memory Leaks</td></tr>
<tr><td>Nginx+fastcgi-server hello.html</td><td><b>409.48</b></td><td>9.14</td><td>2.23</td><td>Memory Leaks</td></tr>
<tr><td>Nginx+fastcgi-server hello.aspx</td><td><b>458.55</b></td><td>9.89</td><td>2.16</td><td>Memory Leaks, Crashes</td></tr>
<tr><td>Nginx+proxy xsp4+ServiceStack</td><td><b>1402.33</b></td><td>45.42</td><td>3.24</td><td>Unstable Results, Errors</td></tr>
</table>
</div>
<p>These benchmarks were made with the Apache <b>ab</b> tool using 10 concurrent requests. You can see that the fastcgi mono server performs 400-500 requests per second. In the new benchmarks I additionally vary the number of concurrent requests to see their influence on the results. The command was <br /><b>ab -n 100000 -c <concurrency> http://testurl</b></p>
<p>Nginx configuration:</p>
<pre style="background-color: lightgray;">
server {
listen 81;
server_name ssbench3;
access_log /var/log/nginx/ssbench3.log;
location / {
root /var/www/ssbench3/;
index index.html index.htm default.aspx Default.aspx;
fastcgi_index Default.aspx;
fastcgi_pass 127.0.0.1:9000;
include /etc/nginx/fastcgi_params;
}
}
</pre>
<p>Benchmark results:</p>
<div style="text-align: center">
<table border="1" cellspacing="0" cellpadding="4">
<tr><th>Nginx fastcgi settings</th><th>Concurrency</th><th>Requests/Sec</th><th>Standard deviation</th><th>std dev %</th></tr>
<tr><td>TCP sockets</td><td>10</td><td><b>2619.56</b></td><td>49.95</td><td>1.83</td></tr>
<tr><td>TCP sockets</td><td>20</td><td><b>2673.198</b></td><td>19.43</td><td>0.72</td></tr>
<tr><td>TCP sockets</td><td>30</td><td><b>2681.166</b></td><td>15.83</td><td>0.59</td></tr>
</table>
</div>
<p>A significant difference, isn't it? These results give us hope that we can increase the throughput of the fastcgi server if we change the architecture and remove the remoting communication from it. And by the way, there is still room to increase performance. Are you ready to go further?</p>
<h1>Faster higher stronger</h1>
<p>As the next step, I switched the communication between nginx and the server from TCP sockets to Unix sockets. Config and results:</p>
<pre style="background-color: lightgray;">
server {
listen 81;
server_name ssbench3;
access_log /var/log/nginx/ssbench3.log;
location / {
root /var/www/ssbench3/;
index index.html index.htm default.aspx Default.aspx;
fastcgi_index Default.aspx;
fastcgi_pass unix:/tmp/fastcgi.socket;
include /etc/nginx/fastcgi_params;
}
}
</pre>
<p>Results</p>
<div style="text-align: center">
<table border="1" cellspacing="0" cellpadding="4">
<tr><th>Nginx fastcgi settings</th><th>Concurrency</th><th>Requests/Sec</th><th>Standard deviation</th><th>std dev %</th></tr>
<tr><td>Unix sockets</td><td>10</td><td><b>2743.622</b></td><td>40.91</td><td>1.49</td></tr>
<tr><td>Unix sockets</td><td>20</td><td><b>2952.244</b></td><td>67.86</td><td>2.29</td></tr>
<tr><td>Unix sockets</td><td>30</td><td><b>2949.118</b></td><td>86.19</td><td>2.92</td></tr>
</table>
</div>
<p>It gained up to 5-10%. Not bad, but I want to increase performance even more, because once we change the simple HTTP response in the fastcgi request handler to the real ASP.NET processing method, we will lose a lot of performance points.</p>
<p>One question whose answer could help to increase performance: is there a way to keep the connection between nginx and the fastcgi server open, instead of creating it for every request? In the configurations above, nginx requires the fastcgi server to close the connection to signal the end of request processing. However, the FastCGI protocol has an EndRequest command, and keeping the connection open and sending an EndRequest record instead of closing the connection could save a huge amount of time when processing small requests. Fortunately, nginx supports such a feature; it's called keepalive. I enabled keepalive and set the minimal number of open connections between nginx and my server to 32. I chose this number because it is higher than the maximum number of concurrent requests I made with ab.</p>
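<p>For reference, the EndRequest record mentioned above is a small fixed-size structure: an 8-byte FastCGI record header followed by an 8-byte end-request body, as defined in the FastCGI 1.0 specification. A minimal sketch of building it (this is illustrative, not HyperFastCgi's actual code):</p>

```csharp
using System;

static class FastCgi
{
    const byte Version = 1;
    const byte EndRequestType = 3;  // FCGI_END_REQUEST
    const byte RequestComplete = 0; // FCGI_REQUEST_COMPLETE

    // Builds the 16-byte FCGI_END_REQUEST record: 8-byte header + 8-byte body.
    public static byte[] EndRequest(ushort requestId, int appStatus)
    {
        return new byte[] {
            Version, EndRequestType,
            (byte)(requestId >> 8), (byte)requestId,     // request id (big-endian)
            0, 8,                                        // content length = 8
            0, 0,                                        // padding length, reserved
            (byte)(appStatus >> 24), (byte)(appStatus >> 16),
            (byte)(appStatus >> 8), (byte)appStatus,     // application exit status
            RequestComplete, 0, 0, 0                     // protocol status + 3 reserved bytes
        };
    }
}

class EndRequestDemo
{
    static void Main()
    {
        byte[] record = FastCgi.EndRequest(1, 0);
        Console.WriteLine(record.Length); // 16
        // With fastcgi_keep_conn on, the server writes this record and keeps the
        // socket open for the next request; without keepalive, nginx only learns
        // the request is finished when the server closes the socket.
    }
}
```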
<pre style="background-color: lightgray;">
upstream fastcgi_backend {
# server 127.0.0.1:9000;
server unix:/tmp/fastcgi.socket;
keepalive 32;
}
server {
listen 81;
server_name ssbench3;
access_log /var/log/nginx/ssbench3.log;
location / {
root /var/www/ssbench3/;
index index.html index.htm default.aspx Default.aspx;
fastcgi_index Default.aspx;
<b>fastcgi_keep_conn on;</b>
<b>fastcgi_pass fastcgi_backend;</b>
include /etc/nginx/fastcgi_params;
}
}
</pre>
<div style="text-align: center">
<table border="1" cellspacing="0" cellpadding="4">
<tr><th>Nginx fastcgi settings</th><th>Concurrency</th><th>Requests/Sec</th><th>Standard deviation</th><th>std dev %</th></tr>
<tr><td>TCP sockets. KeepAlive</td><td>10</td><td><b>3720.23</b></td><td>49.36</td><td>1.33</td></tr>
<tr><td>TCP sockets. KeepAlive</td><td>30</td><td><b>3907.85</b></td><td>80.48</td><td>2.06</td></tr>
<tr><td>Unix sockets. KeepAlive</td><td>10</td><td><b>4024.678</b></td><td>122.33</td><td>3.04</td></tr>
<tr><td>Unix sockets. KeepAlive</td><td>20</td><td><b>4458.714</b></td><td>72.87</td><td>1.63</td></tr>
<tr><td>Unix sockets. KeepAlive</td><td>30</td><td><b>4482.648</b></td><td>19.40</td><td>0.43</td></tr>
</table>
</div>
<p>Wow! That is a huge performance gain! Up to 50% compared with the previous results! I decided this was enough for a proof of concept, and I could start to create a faster fastcgi mono web server. As a sanity check, I made a simple .NET web server (without nginx) which always returns a "Hello, world!" HTTP response and tested it with ab. It showed ~5000 reqs/sec, which is close to my fastcgi proof-of-concept server.</p>
<h1>HyperFastCGI server</h1>
<p>The target is clear now: I am going to create a fast and reliable fastcgi server for mono, which can serve as many requests per second as possible and be stable. Unfortunately, it cannot be achieved by just performance-tweaking the current mono fastcgi server; the architecture needs to be changed to avoid cross-domain calls while processing requests.</p>
<p>What I did:
<ul>
<li>I wrote my own connection handling using async sockets. This should also decrease processor usage, but I did not compare the servers on this parameter.</li>
<li>I totally rewrote the FastCGI packet parsing, trying to decrease the number of operations needed to handle the packets.</li>
<li>I changed the architecture by moving the FastCGI packet handling into the same domain where the web application is located.</li>
<li>Currently there are no known memory leaks when processing requests.</li>
</ul>
This helped to improve the performance of the server; here are the benchmarks:</p>
<div style="text-align: center">
<table border="1" cellspacing="0" cellpadding="4">
<tr><th>Url</th><th>Nginx fastcgi settings/Concurrency</th><th>Requests/Sec</th><th>Standard deviation</th><th>std dev %</th></tr>
<tr><td>/hello.aspx</td><td>TCP keepalive/10</td><td><b>1404.174</b></td><td>24.93</td><td>1.78</td></tr>
<tr><td>/servicestack/json</td><td>TCP keepalive/10</td><td><b>1671.15</b></td><td>21.40</td><td>1.28</td></tr>
<tr><td>/servicestack/json</td><td>TCP keepalive/20</td><td><b>1718.158</b></td><td>41.46</td><td>2.41</td></tr>
<tr><td>/servicestack/json</td><td>TCP keepalive/30</td><td><b>1752.69</b></td><td>34.56</td><td>1.97</td></tr>
<tr><td>/servicestack/json</td><td>Unix sockets keepalive/10</td><td><b>1755.55</b></td><td>40.30</td><td>2.30</td></tr>
<tr><td>/servicestack/json</td><td>Unix sockets keepalive/20</td><td><b>1817.488</b></td><td>39.30</td><td>2.16</td></tr>
<tr><td>/servicestack/json</td><td>Unix sockets keepalive/30</td><td><b>1822.984</b></td><td>36.48</td><td>2.00</td></tr>
</table>
</div>
<p>The performance compared to the original mono fastcgi server increased several times! But this is not enough. While testing, I found that threads were created and destroyed very often. Creating a thread is an expensive operation, so I decided to increase the minimal number of threads in the threadpool. I added a new option <b>/minthreads</b> to the server and set it to <b>/minthreads=20,8</b>, which means there will be at least 20 running worker threads in the threadpool and 8 IO threads (for async socket communication).</p>
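<p>The /minthreads=20,8 option maps directly onto the standard ThreadPool API. A minimal sketch of what such a server can do at startup (the option parsing is assumed; only the ThreadPool calls are real framework API):</p>

```csharp
using System;
using System.Threading;

class MinThreadsDemo
{
    static void Main()
    {
        // /minthreads=20,8 : keep at least 20 worker threads and 8 IO completion
        // threads warm, so bursts of requests don't pay the cost of spinning
        // threads up on demand.
        bool ok = ThreadPool.SetMinThreads(20, 8);

        int worker, io;
        ThreadPool.GetMinThreads(out worker, out io);
        Console.WriteLine("set={0} workers={1} io={2}", ok, worker, io);
    }
}
```

<p>SetMinThreads returns false if the requested values are out of range (for example above the configured maximum), so a real server would want to check the return value and log a warning.</p>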
<p>/minthreads=20,8 benchmarks:</p>
<div style="text-align: center">
<table border="1" cellspacing="0" cellpadding="4">
<tr><th>Url</th><th>Nginx fastcgi settings/Concurrency</th><th>Requests/Sec</th><th>Standard deviation</th><th>std dev %</th></tr>
<tr><td>/servicestack/json</td><td>TCP keepalive/10</td><td><b>2041.246</b></td><td>23.18</td><td>1.14</td></tr>
<tr><td>/servicestack/json</td><td>TCP keepalive/20</td><td><b>2070.08</b></td><td>10.95</td><td>0.53</td></tr>
<tr><td>/servicestack/json</td><td>TCP keepalive/30</td><td><b>2093.526</b></td><td>24.27</td><td>1.16</td></tr>
<tr><td>/servicestack/json</td><td>Unix sockets keepalive/10</td><td><b>2156.754</b></td><td>37.74</td><td>1.75</td></tr>
<tr><td>/servicestack/json</td><td>Unix sockets keepalive/20</td><td><b>2182.774</b></td><td>42.96</td><td>1.97</td></tr>
<tr><td>/servicestack/json</td><td>Unix sockets keepalive/30</td><td><b>2268.676</b></td><td>28.39</td><td>1.25</td></tr>
</table>
</div>
<p>Such an easy thing gives a performance boost of up to 20%!</p>
<p>Finally, I've placed the benchmarks for all nginx configurations in one chart:</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIrNLmW92tWpAmKHh-FY7HKQa5WnVk41-xfkIDhxfTh3-sg0Czl9S60dwkwcc5H2ASKBH_iWkvWqcUHsVCKAOCPomxV9iMUo_8Tq5HjSdKPhFi1KfpQ90AHSLTygVRaGGzHmg_Zpiswhip/s1600/hyperfastcgi-ss.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIrNLmW92tWpAmKHh-FY7HKQa5WnVk41-xfkIDhxfTh3-sg0Czl9S60dwkwcc5H2ASKBH_iWkvWqcUHsVCKAOCPomxV9iMUo_8Tq5HjSdKPhFi1KfpQ90AHSLTygVRaGGzHmg_Zpiswhip/s1600/hyperfastcgi-ss.png" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBNNcxeqWSx10sAHXwBq14ljiQ-astrQuLi61iDa9raIeaEgrmhuwPXwayO8FSFvpIrVm2UEOchnWjO-uOtS1AY5mETS2M7NkKgxPcw10eF2PV7WATPac09f6H3QoZ3hWCFLC6KqQ_CAi9/s1600/hyperfastcgi-hello.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBNNcxeqWSx10sAHXwBq14ljiQ-astrQuLi61iDa9raIeaEgrmhuwPXwayO8FSFvpIrVm2UEOchnWjO-uOtS1AY5mETS2M7NkKgxPcw10eF2PV7WATPac09f6H3QoZ3hWCFLC6KqQ_CAi9/s1600/hyperfastcgi-hello.png" /></a></div>
<p>In closing, the HyperFastCgi server can be found on <a href="https://github.com/xplicit/HyperFastCgi">github</a>. Currently it's not well tested, so use it at your own risk. But at least all the ServiceStack (v3) WebHosts.Integration tests which passed with XSP passed with HyperFastCgi too. To install HyperFastCgi simply do:</p>
<pre style="background-color: lightgray">
git clone https://github.com/xplicit/HyperFastCgi.git
cd HyperFastCgi
./autogen.sh --prefix=/usr && make
sudo make install
</pre>
<p>The configuration options are the same as for mono-server-fastcgi, plus a few new parameters:
<br />/minthreads=nw,nio - minimal number of worker and IO threads
<br />/maxthreads=nw,nio - maximal number of worker and IO threads
<br />/keepalive=<true|false> - use the keepalive feature or not. Default is true
<br />/usethreadpool=<true|false> - use the threadpool for processing requests. Default is true
</p>
<p>If the HyperFastCgi server turns out to be interesting to others for production use, I am going to improve it. What can be improved:</p>
<ul>
<li>Support several virtual paths in one server. Currently only one web application is supported</li>
<li>Write unit tests to be sure, that the server is working properly</li>
<li>Catch and properly handle the UnloadDomain() command from ASP.NET. This command is raised when web.config is changed or during health checking by the web server. (Edit: already done)</li>
<li>Add a management and monitoring application which shows server statistics (number of requests served and so on) and recommends performance tweaks</li>
<li>Additional performance improvements</li>
</ul>
<p>Links:
<br/><a href="https://github.com/xplicit/HyperFastCgi">HyperFastCgi server source code</a>
<br/><a href="2013/11/servicestack-performance-in-mono-p1.html">ServiceStack performance in mono. Part 1</a>
<br/><a href="2013/11/servicestack-performance-in-mono-p2.html">ServiceStack performance in mono. Part 2</a>
<br/>
<br/><a href="2013/12/servicestack-performance-on-mono-part4.html">ServiceStack performance in mono. Part 4</a>
</p>
Anonymoushttp://www.blogger.com/profile/15995362150377842185noreply@blogger.com5tag:blogger.com,1999:blog-5803724860059155983.post-89739766498175906392013-11-17T14:10:00.002-08:002014-01-20T08:36:09.119-08:00ServiceStack performance in mono part2<div dir="ltr" style="text-align: left;" trbidi="on">
<br /></div>
In the previous part I described some performance enhancements that can be used with ServiceStack running on the mono XSP web server. But nobody uses XSP in a production environment; the most common setups are nginx+mono-fastcgi and apache+mod_mono. So what is the performance in such environments? Let's see.
<h1>Configuration</h1>
<p><b>Apache</b></p>
If you want to use mono with apache, you have to install mod_mono for apache and configure it according to this <a href="http://www.mono-project.com/Mod_mono">article</a>. To install mod_mono on Ubuntu you can type
<pre>
<b>sudo apt-get install libapache2-mod-mono</b>
</pre>
After that you have to reinstall the mono web server that was compiled from sources: change to the xsp source directory and run <b>sudo make install</b> there.
I am going to benchmark mono under apache in the following configurations:
<ul>
<li>Direct access to a static html file from apache without mono.</li>
<li>Get the ServiceStack "Hello, World!" service through apache2-mod-mono</li>
<li>Get a static html file and a "Hello, World!" aspx page through apache2-mod-mono without ServiceStack.</li>
</ul>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgVam0jNUzeepQDJKs3lMU-Z5YQDhQlN0dfcNFyI1thhaTJHANdyXbn-FZsmx1xdIrFtrrTbjVpZC_J1YEfBSUMlfeG2y4pWo0vzNElNXyc4A1P0crBI_cTHr_ovEMbrQcL1iVXiEXgCR9B/s1600/apache.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgVam0jNUzeepQDJKs3lMU-Z5YQDhQlN0dfcNFyI1thhaTJHANdyXbn-FZsmx1xdIrFtrrTbjVpZC_J1YEfBSUMlfeG2y4pWo0vzNElNXyc4A1P0crBI_cTHr_ovEMbrQcL1iVXiEXgCR9B/s1600/apache.png" /></a></div>
To achieve this I use the following config in /etc/apache2/http.conf. For direct static file access I placed hello.html in the web server root (/var/www)
<pre style="background-color: lightgray">
NameVirtualHost ssbench3:80
NameVirtualHost ssbench2:80
<VirtualHost ssbench3:80>
ServerName ssbench3
DocumentRoot /var/www/ssbench3
# MonoPath default "/usr/bin/mono/2.0"
MonoServerPath ssbench3 /usr/bin/mod-mono-server4
AddMonoApplications ssbench3 "ssbench3:/:/var/www/ssbench3"
<location />
MonoSetServerAlias ssbench3
Allow from all
Order allow,deny
SetHandler mono
</location>
</VirtualHost>
<VirtualHost ssbench2:80>
ServerName ssbench2
DocumentRoot /var/www/ssbench2
# MonoPath default "/usr/bin/mono/2.0"
MonoServerPath ssbench2 /usr/bin/mod-mono-server4
AddMonoApplications ssbench2 "ssbench2:/:/var/www/ssbench2"
<location />
MonoSetServerAlias ssbench2
Allow from all
Order allow,deny
SetHandler mono
</location>
</VirtualHost>
</pre>
<p><b>Nginx</b></p>
<p>The configuration of Nginx is similar to Apache; the differences are only in the transport between mono and the front-end web server: Apache uses mod-mono-server while nginx uses fastcgi-mono-server. Also, you may notice that I added one additional configuration: nginx as a proxy to xsp4.</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjY23PPwh4S-gV8u1nNXjaNrKmhEedey8Q23FNhQ0heFnk8LmHc3HIuOSrOgH1jfFZjgkRv4jRMdknqVovLUKzVplebowo0xFhmv35zeZBMRL9SSIBelBBZQb8VXazQUII7hXaxuu-KB78L/s1600/nginx.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjY23PPwh4S-gV8u1nNXjaNrKmhEedey8Q23FNhQ0heFnk8LmHc3HIuOSrOgH1jfFZjgkRv4jRMdknqVovLUKzVplebowo0xFhmv35zeZBMRL9SSIBelBBZQb8VXazQUII7hXaxuu-KB78L/s1600/nginx.png" /></a></div>
<p>To configure nginx I followed this <a href="http://www.mono-project.com/FastCGI_Nginx">guide</a>. I added the following lines to <b>/etc/nginx/fastcgi_params</b>
<pre style="background-color: lightgray">
fastcgi_param HTTP_HOST $host;
fastcgi_param PATH_INFO "";
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
</pre>
<p>And added virtual hosts to <b>/etc/nginx/sites-enabled/default</b></p>
<pre style="background-color: lightgray">
server {
listen 81;
server_name ssbench1;
access_log /var/log/nginx/ssbench1.log;
location / {
proxy_pass http://127.0.0.1:8080/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
server {
listen 81;
server_name ssbench2;
access_log /var/log/nginx/ssbench2.log;
location / {
root /var/www/ssbench2/;
index index.html index.htm default.aspx Default.aspx;
fastcgi_index Default.aspx;
fastcgi_pass 127.0.0.1:9000;
include /etc/nginx/fastcgi_params;
}
}
server {
listen 81;
server_name ssbench3;
access_log /var/log/nginx/ssbench3.log;
location / {
root /var/www/ssbench3/;
index index.html index.htm default.aspx Default.aspx;
fastcgi_index Default.aspx;
fastcgi_pass 127.0.0.1:9000;
include /etc/nginx/fastcgi_params;
}
}
</pre>
<p>After that, I ran the command:</p>
<b>fastcgi-mono-server4 /applications=ssbench2:/:/var/www/ssbench2/,ssbench3:/:/var/www/ssbench3 /socket=tcp:127.0.0.1:9000</b>
<p>I also ran the xsp4 server hosting ServiceStack on port 8080</p>
<p>EDIT: after the post was written, I additionally benchmarked several configurations that were not mentioned in the first version:
<ul>
<li>Nginx as frontend proxy to apache server with mod-mono</li>
<li>Self-hosted ServiceStack instances based on two classes: AppHostHttpListenerBase and AppHostHttpListenerLongRunningBase. How to create a self-hosted ServiceStack is described
in the <a href="https://github.com/ServiceStack/ServiceStack/wiki/Run-ServiceStack-as-a-daemon-on-Linux">ServiceStack wiki</a>. You can also look at the test <a href="https://github.com/xplicit/monoweb-bench">source code</a> for additional details</li>
<li>Nginx as frontend proxy to self-hosted ServiceStack.</li>
<li>Nginx plus <a href="https://github.com/xplicit/HyperFastCgi">HyperFastCgi</a> (a new FastCGI server I wrote as a replacement for fastcgi-mono-server)</li>
</ul>
</p>
<h1>Benchmark results</h1>
<p>Before I show the results I want to say a couple of words about my expectations. First, I predicted that nginx would be the winner at serving static html pages. That was obvious. Second, I thought nginx+ServiceStack would get slightly better results than Apache+ServiceStack and maybe XSP+ServiceStack, due to nginx's async behaviour and lower processor usage. Also, I thought the performance difference between Apache+ServiceStack and XSP+ServiceStack should be minimal: they both use the same threading model, so all I expected was a little overhead in the apache/mod-mono communication. But... Here are the results</p>
<div style="text-align: center">
<table border="1" cellspacing="0" cellpadding="4">
<tr><th width="25%">Configuration</th><th>requests/sec</th><th>Standard deviation</th><th>std dev %</th><th>Comments</th></tr>
<tr><td>Apache2 direct file</td><td><b>7129.95</b></td><td>217.57</td><td>3.05</td><td></td></tr>
<tr><td>Apache2+mod_mono+ServiceStack</td><td><b>1314.30</b></td><td>22.40</td><td>1.70</td><td></td></tr>
<tr><td>Apache2+mod_mono hello.html</td><td><b>924.02</b></td><td>12.82</td><td>1.39</td><td></td></tr>
<tr><td>Apache2+mod_mono hello.aspx</td><td><b>----</b></td><td>----</td><td>---</td><td>Memory Leaks, Crashes</td></tr>
<tr><td>Nginx direct file</td><td><b>10458.71</b></td><td>147.28</td><td>1.41</td><td></td></tr>
<tr><td>Nginx+fastcgi-server+ServiceStack</td><td><b>571.36</b></td><td>8.81</td><td>1.54</td><td>Memory Leaks</td></tr>
<tr><td>Nginx+fastcgi-server hello.html</td><td><b>409.48</b></td><td>9.14</td><td>2.23</td><td>Memory Leaks</td></tr>
<tr><td>Nginx+fastcgi-server hello.aspx</td><td><b>458.55</b></td><td>9.89</td><td>2.16</td><td>Memory Leaks, Crashes</td></tr>
<tr><td>Nginx+proxy to Apache2+mod-mono+ServiceStack</td><td><b>1143.82</b></td><td>8.49</td><td>0.74</td><td></td></tr>
<tr><td>Nginx+proxy to self-hosted ServiceStack (AppHost HttpListenerBase)</td><td><b>1993.82</b></td><td>17.62</td><td>0.88</td><td></td></tr>
<tr><td>Nginx+proxy to self-hosted ServiceStack (AppHost HttpListenerLongRunningBase)</td><td><b>1664.94</b></td><td>27.45</td><td>1.65</td><td></td></tr>
<tr><td>Nginx+HyperFastCgi (tcp keepalive)+ServiceStack</td><td><b>2041.25</b></td><td>23.18</td><td>1.14</td><td><a href="http://forcedtoadmin.blogspot.com/2013/12/servicestack-performance-in-mono-p3.html">See more info</a></td></tr>
<tr><td>Nginx+proxy to xsp4+ServiceStack</td><td><b>1402.33</b></td><td>45.42</td><td>3.24</td><td>Unstable Results, Errors</td></tr>
<tr><td>xsp4+ServiceStack</td><td><b>2246.51</b></td><td>21.31</td><td>0.94</td><td></td></tr>
<tr><td>Self-hosted ServiceStack (AppHost HttpListenerBase)</td><td><b>2697.1</b></td><td>30.1</td><td>1.12</td><td></td></tr>
<tr><td>Self-hosted ServiceStack (AppHost HttpListenerLongRunningBase)</td><td><b>2313.11</b></td><td>33.14</td><td>1.43</td><td></td></tr>
</table>
</div>
<p></p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicR55Nn0Gq0ge91A35bEs7BtKMqLC16tgU70TSA_yg8HZP-Go9TfpGkAZJww_ukiZIC4AQsclkbSh7-WgG8vbsbua54BAxalJcn9IdRbqTfWvqaMLmawkN3uuqUlZ6dABWctRVUT2bx6iv/s1600/static-file.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicR55Nn0Gq0ge91A35bEs7BtKMqLC16tgU70TSA_yg8HZP-Go9TfpGkAZJww_ukiZIC4AQsclkbSh7-WgG8vbsbua54BAxalJcn9IdRbqTfWvqaMLmawkN3uuqUlZ6dABWctRVUT2bx6iv/s1600/static-file.png" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicFoEH-kNbAZ9av4Fht-yCYcYFBmmDThfp1QgQ4EsMPjipMR5bPIGe89nwgZ0CTx0p_PhOXYCvt7kxWgjl6X3VPYv19xt7p3N0_dX0493QfpmMKPgYlPL16g3kvunT43EzHkU7XwE0ymxA/s1600/servicestack2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicFoEH-kNbAZ9av4Fht-yCYcYFBmmDThfp1QgQ4EsMPjipMR5bPIGe89nwgZ0CTx0p_PhOXYCvt7kxWgjl6X3VPYv19xt7p3N0_dX0493QfpmMKPgYlPL16g3kvunT43EzHkU7XwE0ymxA/s1600/servicestack2.png" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBW8-JJbaEfKzG_z-WflJ1063g1_l3fK_BB-bfiIJcPyHzJRR_bLSiOJFGs1C6bHuUxbtgzfP3v-8244OpjDAnHkEkdP4w_tIapnscvRVWHt-7XR8B6sBAKYI0ZnCBwyHcYJIv8qeOS-KY/s1600/getaspxpage.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBW8-JJbaEfKzG_z-WflJ1063g1_l3fK_BB-bfiIJcPyHzJRR_bLSiOJFGs1C6bHuUxbtgzfP3v-8244OpjDAnHkEkdP4w_tIapnscvRVWHt-7XR8B6sBAKYI0ZnCBwyHcYJIv8qeOS-KY/s1600/getaspxpage.png" /></a></div>
<p>What can we see? First place in serving ServiceStack goes to xsp4, then comes Apache+mod_mono, and last is Nginx+fastcgi-server, which is almost four times worse than the winner. I did not mention the Nginx+proxy to xsp4 configuration here because in half of the test runs I got errors when receiving json data. There were not many errors (~1500 per 100 000 requests), but they existed, and that was the reason to drop the nginx+xsp4 configuration from the competition. By the way, the performance result for that configuration is slightly better than apache+mod_mono and much better than Nginx+fastcgi-server.</p>
<p>I also did not include the HyperFastCgi server in the chart, although it shows good performance, because it was created after these benchmarks were done. Benchmarks of Nginx+HyperFastCgi can be found in the <a href="http://forcedtoadmin.blogspot.com/2013/12/servicestack-performance-in-mono-p3.html">next part</a></p>
<p>In serving static html files, first place goes to Nginx, as expected, second to Apache, and after them come all the other configurations: xsp4 (you can see test results for serving static html with xsp4 in the <a href="http://forcedtoadmin.blogspot.com/2013/11/servicestack-performance-in-mono-p1.html">previous post</a>), Apache+mod_mono, Nginx+fastcgi. They are all really very slow compared with Nginx or Apache.</p>
<p>For the .aspx page I could not get reliable results. First, there are memory leaks in the mono web server while processing aspx pages, and they are probably the reason for the crashes I got. I could only get through ~20 000 requests with Nginx+fastcgi and several thousand requests with Apache+mod_mono before mono hung or got a SIGSEGV. I suspect the reason for these faults is changes in thread handling and spawning and changes in the mono GC. I hope this instability will be fixed in the next mono release.</p>
<p>I also mentioned that fastcgi-mono-server leaked memory heavily during the runs. After processing 100 000 requests it was using about 600M of memory! With such a configuration you cannot serve a large number of requests without regularly restarting the server. And the performance of fastcgi-mono-server is extremely slow compared to apache+mod-mono. What is going on inside this server? I am going to look into it in the <a href="http://forcedtoadmin.blogspot.com/2013/12/servicestack-performance-in-mono-p3.html">next posts</a></p>
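A back-of-the-envelope check puts the leak at roughly 6 KB per request (a sketch assuming the whole ~600M growth is leaked memory and that the leak grows linearly with the number of requests):

```python
# Rough estimate of memory leaked per request by fastcgi-mono-server.
# Illustrative arithmetic only, not a measurement from the benchmark.
leaked_mb = 600
requests = 100_000

leaked_kb_per_request = leaked_mb * 1024 / requests
print(leaked_kb_per_request)  # 6.144
```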
Links:
<ul>
<li><a href="http://forcedtoadmin.blogspot.com/2013/11/servicestack-performance-in-mono-p1.html">ServiceStack performance in mono part 1</a></li>
<li><a href="http://forcedtoadmin.blogspot.com/2013/12/servicestack-performance-in-mono-p3.html">ServiceStack performance in mono part 3</a></li>
<li><a href="http://forcedtoadmin.blogspot.com/2013/12/servicestack-performance-in-mono-part4.html">ServiceStack performance in mono part 4</a></li>
</ul>
Anonymoushttp://www.blogger.com/profile/15995362150377842185noreply@blogger.com2tag:blogger.com,1999:blog-5803724860059155983.post-78132805140820392592013-11-06T14:46:00.000-08:002014-01-20T08:34:13.019-08:00Servicestack performance in mono <div dir="ltr" style="text-align: left;" trbidi="on">
<br /></div>
<p>When I was reading the ServiceStack channel on Google+ I found a <a href="http://www.techempower.com/benchmarks/#section=data-r7&hw=i7&test=json">benchmark</a> which said that ServiceStack serialization under mono is very slow. That discouraged me, because I thought SS demonstrated very good json serialization performance versus other .net json serialization frameworks. Maybe the testers used a wrong configuration or a bad test case? These questions remained open for me, so I decided to check it myself.</p>
<h1>Preparing environment and measurement metrics</h1>
<p>
My environment:<br />
CPU: Intel(R) Core(TM)2 Duo CPU E4600 @ 2.40GHz<br />
OS: Ubuntu 12.04 32 bit<br />
Mono Runtime Engine version 3.2.5 (master/6a9c585 Fri Oct 25 01:56:00 NOVT 2013)<br />
</p>
<p>
I built mono from the github sources as described <a href="http://forcedtoadmin.blogspot.ru/2013/09/ubuntu-1304-compiling-mono-and.html">here</a>. As the measurement tool I am going to use <b><a href="http://httpd.apache.org/docs/2.2/programs/ab.html">ab</a></b> from the apache2-utils package. To install <b>ab</b>, run <b>apt-get install apache2-utils</b>. I am going to run ab 5 times, performing 100000 url gets each time, and take the mean of the results. Each run uses 10 concurrent requests.</p>
<p>The command looks like this: <b>ab -n 100000 -c 10 http://host:port/url</b></p>
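The aggregation behind the result tables can be sketched as follows (Python for illustration; the five sample readings are invented, not actual measurements):

```python
import statistics

# Five hypothetical requests/sec readings from separate `ab` runs
runs = [1890.0, 1925.5, 1910.2, 1940.1, 1902.9]

mean = statistics.mean(runs)        # the value reported in the tables
stdev = statistics.pstdev(runs)     # spread across the 5 runs
stdev_pct = stdev / mean * 100      # the "std dev %" column
print(round(mean, 2))  # 1913.74
```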
<p>ServiceStack was compiled from github v3 branch in mono release build for Mono/.NET 4.0 platform</p>
<p>As soon as the environment was prepared I had to create a test case. I chose to create a very simple ServiceStack service, similar to the one in the benchmarks, which returns a "Hello, world!" message. You can find the <a href="https://github.com/xplicit/monoweb-bench">source code at github</a>. I also wanted some metrics for comparison, so I created a simple ASP.NET application with "Hello, world" .aspx and .html files and benchmarked them.
</p>
<h1>Start benchmarking</h1>
<p>
I made all tests from localhost. This reduces network traffic overhead, but consumes processor resources, which penalizes the absolute results. The difference is not that big for mono benchmarks, though, so I decided to favour more stable results over higher absolute values (which would be higher on a faster processor)
</p>
<div style="text-align: center">
<table border="1" cellspacing="0" cellpadding="4">
<tr><th>Url</th><th>Web server</th><th>requests/sec</th><th>Standard deviation</th><th>std dev %</th></tr>
<tr><td>hello.aspx</td><td>xsp4</td><td><b>1659.238</b></td><td>79.39</td><td>4.78</td></tr>
<tr><td>hello.html</td><td>xsp4</td><td><b>1004.428</b></td><td>34.47</td><td>3.43</td></tr>
<tr><td>hello.html</td><td>apache2</td><td><b>7129.956</b></td><td>76.80</td><td>1.08</td></tr>
<tr><td>Servicestack</td><td>xsp4</td><td><b>1913.746</b></td><td>34.84</td><td>1.82</td></tr>
</table>
</div>
<p>Amazing results. You can see that serving a static html page with apache2 performs better than with xsp4, which was predictable, but not a seven-times difference! Also, the aspx page is served 1.6x faster than the static html one. Did you expect that? I did not.</p>
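The ratios quoted above follow directly from the table (a quick sanity check, not part of the original benchmark):

```python
# requests/sec figures taken from the table above
apache_html = 7129.956
xsp_html = 1004.428
xsp_aspx = 1659.238

print(round(apache_html / xsp_html, 1))  # 7.1  -- the "seven-times" gap
print(round(xsp_aspx / xsp_html, 2))     # 1.65 -- aspx vs static html
```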
<p>Also, when I ran these benchmarks, I found that xsp4 grew in memory very fast when serving aspx pages, and after some limit (~265m) it killed threads and produced denial-of-service errors. It seems there is a memory leak in the mono web server</p>
<p>But our goal is ServiceStack. You can see that ServiceStack runs faster than the aspx page or the static html page on xsp4, but not as fast as apache2 serving static html. Why is it so slow? Can we improve the performance? You will find answers to these questions in the next chapters</p>
<h1>Looking inside ServiceStack runs</h1>
<p>Why does ServiceStack on mono not run as fast as we might expect? To find the answer I turned on profiling mode for xsp4 and looked into the generated profiles. To do this, execute the following command in the shell before running xsp4:</p>
<p><b>export MONO_OPTIONS="--profile=log:noalloc,output=../output.mlpd"</b></p>
<p><b>log:noalloc</b> means that we don't want to gather info about allocated objects; we are interested only in method call timings<br />
<b>output=../output.mlpd</b> sets the name of the file where profiling information is gathered. Please note that we put the output file in the parent directory instead of the current one: the web server watches for changes in the current directory, and writing there would flood the web server with change notifications and drag down performance.</p>
<p>After that run the commands:</p>
<p><b>ab -n 500 -c 10 http://host:port/url</b><br />
<b>mprof-report output.mlpd > profile.txt</b></p>
<p>500 requests are enough to get profiling information; <b>mprof-report</b> produces a human-readable form of the info.</p>
<pre style="margin-left:-25px; margin-right: -25px;overflow-x: scroll">
Method call summary
Total(ms) Self(ms) Calls Method name
56244 8 1581 (wrapper runtime-invoke) <Module>:runtime_invoke_void__this___object (object,intptr,intptr,intptr)
54344 3 500 Mono.WebServer.XSPWorker:RunInternal (object)
54240 3 500 (wrapper remoting-invoke-with-check) Mono.WebServer.XSPApplicationHost:ProcessRequest (int,System.Net.IPEndPoint,System.Net.IPEndPoint,string,string,string,string,byte[],string,intptr,Mono.WebServer.SslInformation)
54237 5 500 Mono.WebServer.XSPApplicationHost:ProcessRequest (int,System.Net.IPEndPoint,System.Net.IPEndPoint,string,string,string,string,byte[],string,intptr,Mono.WebServer.SslInformation)
53513 4 500 Mono.WebServer.BaseApplicationHost:ProcessRequest (Mono.WebServer.MonoWorkerRequest)
53390 1 500 Mono.WebServer.MonoWorkerRequest:ProcessRequest ()
53226 5 500 System.Web.HttpRuntime:ProcessRequest (System.Web.HttpWorkerRequest)
53173 4 500 System.Web.HttpRuntime:RealProcessRequest (object)
53157 14 500 System.Web.HttpRuntime:Process (System.Web.HttpWorkerRequest)
44442 14 500 System.Web.HttpApplication:System.Web.IHttpHandler.ProcessRequest (System.Web.HttpContext)
44403 18 500 System.Web.HttpApplication:Start (object)
41356 416 500 System.Web.HttpApplication:Tick ()
40940 148 500 System.Web.HttpApplication/<Pipeline>c__Iterator1:MoveNext ()
17158 10 500 ServiceStack.WebHost.Endpoints.Support.EndpointHandlerBase:ProcessRequest (System.Web.HttpContext)
17136 39 500 ServiceStack.WebHost.Endpoints.RestHandler:ProcessRequest (ServiceStack.ServiceHost.IHttpRequest,ServiceStack.ServiceHost.IHttpResponse,string)
11422 25 500 System.Web.HttpApplication:PipelineDone ()
11047 12 500 System.Web.HttpApplication:OutputPage ()
11033 53 500 System.Web.HttpResponse:Flush (bool)
<b>10996</b> 74 <b>2108 System.Web.Configuration.WebConfigurationManager:GetSection (string,string,System.Web.HttpContext)</b>
<b>10646</b> 5 <b>1004 System.Configuration.Configuration:GetSectionInstance (System.Configuration.SectionInfo,bool)</b>
<b>9401</b> 569 <b>130811 System.Collections.Hashtable:GetHash (object)</b>
<b>9252</b> 596 <b>106531 System.Collections.Hashtable:get_Item (object)</b>
8405 11 500 System.Web.HttpApplicationFactory:GetApplication (System.Web.HttpContext)
8082 1 500 System.Web.HttpApplication:GetHandler (System.Web.HttpContext,string)
8081 9 500 System.Web.HttpApplication:GetHandler (System.Web.HttpContext,string,bool)
<b>6861</b> 1760 <b>25111 Mono.Globalization.Unicode.SimpleCollator:CompareInternal (string,int,int,string,int,int,bool&,bool&,bool,bool,Mono.Globalization.Unicode.SimpleCollator/Context&)</b>
<b>6707</b> 8 <b>2500 ServiceStack.WebHost.Endpoints.Extensions.HttpRequestWrapper:get_HttpMethod ()</b>
<b>6699</b> 13 <b>500 ServiceStack.WebHost.Endpoints.Extensions.HttpRequestWrapper:Param (string)</b>
</pre>
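To see how much the configuration lookups matter, compare GetSection's total time with the request pipeline total from the output above (a rough share, since profiler Total(ms) values overlap):

```python
# Total(ms) values copied from the profiler output above
get_section_ms = 10996   # WebConfigurationManager:GetSection
pipeline_ms = 53157      # HttpRuntime:Process, the whole request pipeline

share_pct = get_section_ms / pipeline_ms * 100
print(round(share_pct, 1))  # 20.7 -- about a fifth of pipeline time
```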
<p>I put in bold the suspicious methods with both long execution times and large numbers of calls. As you can see, only one of them is from ServiceStack code: the property HttpRequestWrapper.HttpMethod. So what can we do? How can we increase performance when most of the long-executing calls belong to mono and the mono web server?</p>
<p>Let's have a look at which methods call the long-executing ones. To get info about backtraces, run the command</p>
<b>mprof-report --traces ../output.mlpd > profile-traces.txt</b>
<pre style="margin-left:-15px; margin-right: -15px; overflow-x: scroll">
10996 74 2108 System.Web.Configuration.WebConfigurationManager:GetSection (string,string,System.Web.HttpContext)
500 calls from:
System.Web.HttpApplication:Start (object)
System.Web.HttpApplication:Tick ()
System.Web.HttpApplication/<Pipeline>c__Iterator1:MoveNext ()
System.Web.HttpApplication:GetHandler (System.Web.HttpContext,string)
System.Web.HttpApplication:GetHandler (System.Web.HttpContext,string,bool)
System.Web.HttpApplication:LocateHandler (System.Web.HttpRequest,string,string)
500 calls from:
System.Web.HttpRuntime:RealProcessRequest (object)
System.Web.HttpRuntime:Process (System.Web.HttpWorkerRequest)
System.Web.HttpApplication:System.Web.IHttpHandler.ProcessRequest (System.Web.HttpContext)
System.Web.HttpApplication:Start (object)
System.Web.HttpApplication:PreStart ()
System.Web.Configuration.WebConfigurationManager:GetSection (string)
500 calls from:
Mono.WebServer.XSPWorkerRequest:SendHeaders ()
Mono.WebServer.XSPWorkerRequest:GetHeaders ()
Mono.WebServer.MonoWorkerRequest:get_HeaderEncoding ()
System.Web.HttpResponse:get_HeaderEncoding ()
System.Web.Configuration.WebConfigurationManager:SafeGetSection (string,System.Type)
System.Web.Configuration.WebConfigurationManager:GetSection (string)
500 calls from:
System.Web.HttpApplication:System.Web.IHttpHandler.ProcessRequest (System.Web.HttpContext)
System.Web.HttpApplication:Start (object)
System.Web.HttpApplication:Tick ()
System.Web.HttpApplication/<Pipeline>c__Iterator1:MoveNext ()
System.Web.HttpApplication/<RunHooks>c__Iterator0:MoveNext ()
System.Web.Security.UrlAuthorizationModule:OnAuthorizeRequest (object,System.EventArgs)
</pre>
<p>Look at the first backtrace. Don't you think that locating the handler in web.config on every request looks strange? I think all info about handlers should be loaded only once at application start and then reused for each request. If you look into the <a href="https://github.com/mono/mono/blob/master/mcs/class/System.Web/System.Web/HttpApplication.cs">mono code</a> you will see that handlers are cached by mono, so why is the ServiceStack handler not cached?</p>
The answer is in these lines of code:
<pre name="code" class="brush:csharp">
HttpHandlersSection httpHandlersSection = WebConfigurationManager.GetSection ("system.web/httpHandlers", req.Path, req.Context) as HttpHandlersSection;
ret = httpHandlersSection.LocateHandler (verb, url, out allowCache);
IHttpHandler handler = ret as IHttpHandler;
if (allowCache && handler != null && handler.IsReusable)
cache [id] = ret;
</pre>
<p>To be cacheable, the ServiceStack factory handler must implement the IHttpHandler interface with the IsReusable property returning 'true', and caching must be allowed. In the mono source code you can find that allowCache means the handler path in the configuration section must not be "*"; it is allowed to be "servicestack*", for example. So I changed the httpHandlers section in web.config, replacing the attribute path="*" with path="servicestack*", and <a href="https://github.com/xplicit/ServiceStack/commit/821dbae4545ce660ad187b6bfb5cc8f2314caa71">added an implementation</a> of the IHttpHandler interface to ServiceStackHttpHandlerFactory</p>
<pre name="code" class="brush:csharp">
#region IHttpHandler implementation

// The factory itself never serves requests, so this method is never called
void IHttpHandler.ProcessRequest(HttpContext context)
{
    throw new NotImplementedException();
}

// Returning true allows mono to cache the handler between requests
bool IHttpHandler.IsReusable
{
    get { return true; }
}

#endregion
</pre>
<p>Then I recompiled ServiceStack and performed new benchmarks</p>
<div style="text-align: center">
<table border="1" cellspacing="0" cellpadding="4">
<tr><th>Url</th><th>Web server</th><th>requests/sec</th><th>Standard deviation</th><th>std dev %</th></tr>
<tr><td>Servicestack</td><td>xsp4</td><td><b>1913.746</b></td><td>34.84</td><td>1.82</td></tr>
<tr><td>Servicestack reusable handler factory</td><td>xsp4</td><td><b>2003.238</b></td><td>35.39</td><td>1.77</td></tr>
</table>
</div>
<p>Performance increased by 4.68%. Not much, but this is just a start</p>
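The improvement figure follows directly from the two throughput numbers in the table (illustrative arithmetic):

```python
before = 1913.746  # requests/sec, original handler factory
after = 2003.238   # requests/sec, reusable handler factory

gain_pct = (after - before) / before * 100
print(round(gain_pct, 2))  # 4.68
```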
<p>In the profiler we see that GetSection is now called 1624 times instead of 2108</p>
<pre>
14158 33 1624 System.Web.Configuration.WebConfigurationManager:GetSection (string,string,System.Web.HttpContext)
</pre>
<p>Now let's try to remove another source of GetSection overhead. We can see that this method is called from the HttpApplication.PreStart method and from the HttpResponse.HeaderEncoding property. Looking into the source code suggests a solution: get the globalization section only once and then reuse it. This can only be done by changing the mono sources. I <a href="https://github.com/xplicit/mono/commit/380f74d31a1c21cfff64df4d6644f51b7492ed37">did it</a> and got these results:</p>
<div style="text-align: center">
<table border="1" cellspacing="0" cellpadding="4">
<tr><th>Url</th><th>Web server</th><th>requests/sec</th><th>Standard deviation</th><th>std dev %</th></tr>
<tr><td>Servicestack (mono 9eda1b4)</td><td>xsp4</td><td><b>1958.37</b></td><td>21.54</td><td>1.10</td></tr>
<tr><td>Servicestack (patched mono 9eda1b4)</td><td>xsp4</td><td><b>2025.316</b></td><td>21.56</td><td>1.06</td></tr>
</table>
</div>
<p>Performance gained an additional 3.42%. Unfortunately, before applying the patch I had to update mono to revision 9eda1b4, which dropped performance by 50 points from the previous results</p>
Now the profiler shows 611 calls to GetSection taking about 7500ms, which is much better
<pre>
7516 13 611 System.Web.Configuration.WebConfigurationManager:GetSection (string,string,System.Web.HttpContext)
</pre>
<p><b>Please note that this hack will only work if you don't use different globalization sections in web.config files located in subdirectories of your site. If your site requires its own globalization settings per path, don't use this hack</b></p>
<p>Now let's look at the Hashtable:GetHash method. This method is fast, but it is called far too many times. Reducing the number of calls is not simple, but some hints can help. For example, add a key to the appSettings section of the web.config file and you will cut several thousand GetHash calls, though be aware this does not boost performance by any significant amount</p>
<pre name="code" class="brush:xml">
<add key="MonoAspnetInhibitSettingsMap" value="true"/>
</pre>
<p>This key is used by mono to map some config sections onto others. If you do not use the RoleMembership functionality or SqlServerCache, you can disable the mappings by adding this key. For more information read the article <a href="http://www.mono-project.com/ASP.NET_Settings_Mapping">http://www.mono-project.com/ASP.NET_Settings_Mapping</a></p>
<p>..To be continued</p>
<ul>
<li><a href="http://forcedtoadmin.blogspot.com/2013/11/servicestack-performance-in-mono-p2.html">ServiceStack performance in mono part 2</a></li>
<li><a href="http://forcedtoadmin.blogspot.com/2013/12/servicestack-performance-in-mono-p3.html">ServiceStack performance in mono part 3</a></li>
</ul>
Anonymoushttp://www.blogger.com/profile/15995362150377842185noreply@blogger.com2tag:blogger.com,1999:blog-5803724860059155983.post-71629629351269762742013-10-04T16:06:00.000-07:002014-01-20T08:41:51.259-08:00ServiceStack deserialization of inherited objects<div dir="ltr" style="text-align: left;" trbidi="on">
There is an abstract class Item and a DressItem class derived from it
<pre name="code" class="brush: csharp">
public abstract class Item
{
public int ItemId {get; set;}
public string Name {get; set;}
}
public class DressItem : Item
{
}
</pre>
The Item class is made abstract so that ServiceStack can serialize an Item with its type specified, i.e. if we have the construct:
<pre name="code" class="brush: csharp">
Item item=new DressItem(){ItemId=1,Name="Пальто"};
</pre>
then on serialization ServiceStack turns this construct into the json:
<pre name="code" class="brush: javascript">
{"__type": "MyNameSpace.DressItem, MyAssembly","ItemId":1,"Name":"Пальто"}
</pre>
and when deserializing such json into an Item, a DressItem is created.
For example, if we make a service for editing Items
<pre name="code" class="brush: csharp">
public class ItemRequest
{
public Item Item {get; set;}
}
public class ItemEditService : Service
{
public object Put(ItemRequest request)
{
DataContext.UpdateItem(request.Item);
return true;
}
}
</pre>
then when the Put method is called with the json above, request.Item will contain an object of type DressItem. But here is something interesting: if the json is changed slightly, like this:
<pre name="code" class="brush: javascript">
{"ItemId":1, "__type": "MyNameSpace.DressItem, MyAssembly", "Name":"Пальто"}
</pre>
i.e. simply moving ItemId to the front, then request.Item will be null! It turns out that when passing data to the service, "__type" must always come first, otherwise deserialization does not work. And that is not always possible, since we cannot know how other object serializers will behave... Rather strange behavior from ServiceStack; possibly it is a bug.
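Until this is fixed, one client-side workaround is to re-serialize the payload so that "__type" always comes first. A sketch in Python (the helper name is mine, not part of ServiceStack):

```python
import json

def type_first(payload):
    """Return the JSON object with the "__type" key moved to the front,
    as ServiceStack's deserializer requires (hypothetical helper)."""
    obj = json.loads(payload)
    ordered = {}
    if "__type" in obj:
        ordered["__type"] = obj.pop("__type")
    ordered.update(obj)  # dicts preserve insertion order in Python 3.7+
    return json.dumps(ordered, ensure_ascii=False)

fixed = type_first('{"ItemId":1, "__type": "NS.DressItem, Asm", "Name":"Coat"}')
print(fixed)
```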
<br /></div>
Anonymoushttp://www.blogger.com/profile/15995362150377842185noreply@blogger.com0tag:blogger.com,1999:blog-5803724860059155983.post-13001809518512213902013-09-20T08:33:00.000-07:002014-01-20T08:46:02.350-08:00Ubuntu 13.04. Compiling mono and monodevelop from sources<div dir="ltr" style="text-align: left;" trbidi="on">
This page is valid for mono 3.2.3 and monodevelop 4.1.7.
If you want to install and run the latest versions of mono and monodevelop, run the following script
<pre name="code">
sudo apt-get install git mono-mcs mono-gmcs autoconf libtool g++ libglib2.0-cil-dev libgtk2.0-cil-dev libglade2.0-cil-dev libgnome2.0-cil-dev libgconf2.0-cil-dev
mkdir mono
cd mono
git clone https://github.com/mono/mono.git
git clone https://github.com/mono/monodevelop.git
git clone https://github.com/mono/xsp.git
git clone https://github.com/mono/mono-addins.git
cd mono
./autogen.sh --prefix=/usr && make && sudo make install
cd ../mono-addins
./autogen.sh --prefix=/usr && make && sudo make install
cd ../xsp
./autogen.sh --prefix=/usr && make && sudo make install
cd ../monodevelop
./configure --prefix=/usr && make && sudo make install
</pre>
<br /></div>
Anonymoushttp://www.blogger.com/profile/15995362150377842185noreply@blogger.com0tag:blogger.com,1999:blog-5803724860059155983.post-86673113865403400922013-05-19T03:14:00.000-07:002014-01-20T08:39:54.998-08:00ServiceStack and struct serialization<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
I ran into the following problem: when serializing .NET structs with ServiceStack, the resulting JSON contained the name of the struct's type instead of its property values.
<pre name="code" class="brush: csharp">
public struct Point
{
    public int X { get; set;}
    public int Y { get; set;}
}

Point p = new Point (){X=10,Y=20};
JsonSerializer.SerializeToString<Point> (p);
</pre>
I was getting "MyNameSpace.Point" instead of "{X=10,Y=20}". The ServiceStack serializer <a href="http://www.servicestack.net/docs/text-serializers/json-serializer">documentation</a> notes that structs are serialized via their ToString() method, but writing a custom serializer inside ToString() for every struct is overkill. Fortunately, a solution turned up: it was enough to add the line
<pre name="code" class="brush: csharp">
JsConfig<Point>.TreatValueAsRefType = true;
</pre>
After that, serialization worked as expected.
</div>
Anonymoushttp://www.blogger.com/profile/15995362150377842185noreply@blogger.com0tag:blogger.com,1999:blog-5803724860059155983.post-49601001690200328782013-03-15T04:06:00.000-07:002013-03-15T04:06:35.711-07:00Assemblies hell in mono<div dir="ltr" style="text-align: left;" trbidi="on">
I ran into this problem: when expanding the project's References node, MonoDevelop could not find some system assemblies even though they were installed. When I tried to add them via Add Reference, for some reason they were missing from Packages, although the corresponding *.pc files were present in /usr/lib/pkgconfig. This started happening after I tried to compile system libraries from sources (gtk-sharp, gnome-sharp and others).
<br /><br />
At first I thought I had simply broken something in the system and decided to reinstall mono completely. I did aptitude purge mono-runtime, deleted all the files under /usr/lib/mono and /usr/local/lib/mono, as well as some gtk-related *.pc files I found in /usr/lib/pkgconfig, and then installed everything again (sudo aptitude install monodevelop mono-gmcs). After the reinstall things got even worse: some assemblies referencing gtk-sharp stopped loading at all, reporting that mono had resolved the gtk-sharp package as gtk-sharp-3-0.pc (although no such file should have existed in the system!) and was looking for libraries along paths I had deleted long ago.
<br /><br />
I turned everything upside down trying to find that file, but never did. Since I do not believe in miracles, I guessed that somewhere there was a cache into which mono had dumped references to the old assemblies and was now pulling them from there.
<br /><br />
I dug into the mono sources and discovered that when resolving assemblies the data is indeed read first from the file ~/.config/xbuild/pkgconfig-cache-2.xml, and only if there is nothing there does mono scan /usr/lib/pkgconfig and the other directories.
<br /><br />
I killed that file and all my problems went away.
<br /><br />
So we end up with the following rule:
if you build mono or mono libraries from sources, kill the file ~/.config/xbuild/pkgconfig-cache*.xml after every make install, otherwise you may run into problems loading the newly generated assemblies
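A minimal sketch of that rule as a script (the cache path is the one discovered above; everything else is my assumption):

```shell
# Clear mono/xbuild's pkg-config cache so assemblies are re-resolved
# from /usr/lib/pkgconfig and friends on the next run.
# rm -f is safe even when no cache file exists yet.
CACHE_DIR="$HOME/.config/xbuild"
rm -f "$CACHE_DIR"/pkgconfig-cache*.xml
echo "pkgconfig cache cleared"
```

Run it right after every `sudo make install` and the stale-assembly problem described above should not come back.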
<br /></div>
Anonymoushttp://www.blogger.com/profile/15995362150377842185noreply@blogger.com0tag:blogger.com,1999:blog-5803724860059155983.post-9759595505957032252013-01-21T07:08:00.000-08:002013-02-12T03:17:29.033-08:00Action as a replacement for copy-paste<div dir="ltr" style="text-align: left;" trbidi="on">
Even novice programmers know that if the same code appears in two different places, it is worth moving it into a separate function to avoid breeding copy-paste.<br />
But what do you do when the code is almost identical yet differs somewhere in the middle by one or two lines that you simply cannot get rid of?<br />
<br />
For example, we have a list of players in an MMO game:<br />
<br />
<pre name="code" class="brush: csharp:nocontrols">List<Player> players;</pre>
<br />
Now imagine that some event happened at coordinates x,y; say, a grenade exploded, and we now have to recalculate the hit points of the players within the blast area and send them messages about the damage they took.<br />
<br />
<pre name="code" class="brush: csharp">
private void ExplodeGrenade(float x, float y, float range)
{
    //store the constant in a local variable; in reality
    //there may be much more complex code here
    float damage=Grenade.Damage;
    foreach(Player player in players)
    {
        //if the player is inside the damage square, recalculate
        //their damage level: the farther away, the smaller it is
        if (player.X<x+range && player.X>x-range
            && player.Y<y+range && player.Y>y-range)
        {
            //the Distance function computes the distance between two points
            float playerDamage=damage/(Distance(player.X,player.Y,x,y)+1);
            //send the player a message that they took damage
            player.SendMessage(Message.TakeDamage,playerDamage);
        }
    }
}</pre>
Now consider another operation: a player has moved on the map and we need to notify the other players within visibility range:<br />
<br />
<pre name="code" class="brush: csharp">
private void MovePlayer(Player movingPlayer,float x, float y, float range)
{
    //set the player's new coordinates
    movingPlayer.X=x;
    movingPlayer.Y=y;
    foreach(Player player in players)
    {
        //notify every player within visibility range that the player has moved
        if (player.X<x+range && player.X>x-range
            && player.Y<y+range && player.Y>y-range)
        {
            player.SendMessage(Message.Move,movingPlayer);
        }
    }
}
</pre>
<br />
As you can see, these two functions are very similar; essentially only the loop body inside the condition differs, yet nobody wants to write out the same long range checks every single time. And this is a heavily simplified example: a real multithreaded server would need locks, more efficient data structures instead of a flat player list, and plenty of other extras.<br />
How do we solve this without copy-pasting the same code over and over? It is done quite easily using the .NET Action<T> class. <br />
<br />
We move the common code (the loop and the condition) into a separate function:<br />
<br />
<pre name="code" class="brush: csharp">
private void PlayerActionInRange(float x,float y,float range,
Action<Player> playerAction)
</pre>
As you can see, the function has a parameter of type Action<Player>. The Action class is a wrapper around a delegate, or simply put a function, that will be invoked from inside PlayerActionInRange. The parameter of that function is an object of type Player.<br />
<br />
<br />
<br />
<pre name="code" class="brush: csharp">
private void PlayerActionInRange(float x,float y,float range,
    Action<Player> playerAction)
{
    foreach(Player player in players)
    {
        //for every player within range, invoke the callback
        if (player.X<x+range && player.X>x-range
            && player.Y<y+range && player.Y>y-range)
        {
            //pass the current player of the loop as the parameter
            playerAction(player);
        }
    }
}
</pre>
Now let's modify ExplodeGrenade:<br />
<br />
<pre name="code" class="brush: csharp">
private void ExplodeGrenade(float x, float y, float range)
{
    //store the constant in a local variable; in reality
    //there may be much more complex code here
    float damage=Grenade.Damage;
    //pass a reference to an anonymous delegate as the parameter;
    //its code lives right here
    PlayerActionInRange(x,y,range,player =>
    {
        //the Distance function computes the distance between two points
        float playerDamage=damage/(Distance(player.X,player.Y,x,y)+1);
        //send the player a message that they took damage
        player.SendMessage(Message.TakeDamage,playerDamage);
    });
}
</pre>
Now ExplodeGrenade is much easier to understand. And imagine if the selection logic in a real project took not two but at least a dozen lines of code: how much simpler this function would look!<br />
<br />
MovePlayer can be made even simpler using a lambda expression:<br />
<pre name="code" class="brush: csharp">
private void MovePlayer(Player movingPlayer,
    float x, float y, float range)
{
    //set the player's new coordinates
    movingPlayer.X=x;
    movingPlayer.Y=y;
    //just a single line of code
    PlayerActionInRange(x,y,range,
        p=>p.SendMessage(Message.Move,movingPlayer));
}
</pre>
This simple trick substantially improves code readability and avoids the mistakes that creep in when the same text is copied from one function to another.
<br /></div>
Anonymoushttp://www.blogger.com/profile/15995362150377842185noreply@blogger.com0tag:blogger.com,1999:blog-5803724860059155983.post-25999147040081303152013-01-21T01:49:00.001-08:002013-01-21T01:49:17.756-08:00In-app advertising from Creara Media<div dir="ltr" style="text-align: left;" trbidi="on">
Last summer quite a few ad brokers approached me offering to put advertising into my VKontakte game. I had no intention of embedding ads I could not moderate: the risk was too high that an ad forbidden by VKontakte's rules would slip into the app and get it banned.<br /> One of the companies that reached out was Creara Media. Since the company is an official partner, there should have been no problems with the ad content, so I asked them how much I would be able to earn from their ads. They sent me averaged statistics, I did the math and worked out that from every 10,000 unique users I would earn a whopping 90 rubles a day from ads. <br /> I assumed I had miscalculated. I wrote to the Creara manager asking whether my numbers were right, but never got an answer, so I dropped Creara.<br /> Six months later I decided to check it in practice and hooked up their ad preloaders for a test. After three days of testing I got some very interesting figures:<br /> The preloader is shown to roughly every tenth user, i.e. per 1000 DAU we get 100 impressions.<br />
CTR is about 3% (2.83)<br /> Each click pays roughly 1 ruble, plus 6 rubles per 1000 impressions<br /> In total, the ad revenue per 1000 DAU comes to:<br /> 1000*0.1*0.03*1 + 1000*0.1*6/1000 = 3 rubles 60 kopecks. Accordingly, 100,000 DAU would bring in 360 rubles a day. Simply fantastic income! <br /> For small apps with under 10,000 DAU it turns out to be far more profitable to log in once a day yourself and click through the links than to collect the money from the ads inside the app: sheer nonsense.<br /> And I am not even mentioning that money from Creara can be received no earlier than 45 days after it is credited, and to get these pennies they require you to make the app's statistics public.</div>
Anonymoushttp://www.blogger.com/profile/15995362150377842185noreply@blogger.com0tag:blogger.com,1999:blog-5803724860059155983.post-51172692948056268242013-01-20T04:22:00.000-08:002014-01-20T08:39:30.467-08:00MonoDevelop on Ubuntu 10.04<div dir="ltr" style="text-align: left;" trbidi="on">
What do you do when you need the latest version of MonoDevelop on Ubuntu and the official repositories do not have it? <br />
<br />
First it is worth looking at the semi-official <a href="http://badgerports.org/">badgerports</a> repository: packages for the LTS versions of Ubuntu usually appear there with a small delay after new mono and monodevelop releases. Unfortunately, after Ubuntu 12.04 came out the author stopped updating packages for Lucid Lynx, and I needed to install MonoDevelop precisely on 10.04. <br />
<br />
So I decided to compile MonoDevelop from sources. To be honest, the memories of how much I had to suffer a few years ago to build MonoDevelop (or rather, its prerequisites) suggested that the endeavour might well fail. But, as it turned out, everything went quite smoothly.<br />
<br />
First, install Mono 2.10.8 from <a href="http://badgerports.org/">badgerports</a>. The process of adding the repository is described on the site, so I will not repeat it here. Also install MonoDevelop 2.8.<br />
<br />
Install the prerequisites:<br />
<pre><code>sudo aptitude install intltool libmono-addins-cil-dev libmono-addins-gui-cil-dev gnome-sharp2</code></pre>
Note the packages<code> libmono-addins-cil-dev</code><code> libmono-addins-gui-cil-dev </code>If you do not install them, you will get errors about a mono-addins version mismatch. <br />
Next, grab the sources from the latest stable branch: <br />
<pre><code>git clone -b monodevelop-3.0-series https://github.com/mono/monodevelop.git</code></pre>
Build:<br />
<pre><code>./configure
make</code></pre>
Check that it runs:<br />
<pre><code>make run
</code></pre>
If everything is fine, you can install it:<br />
<pre><code>make install</code></pre>
<br />
And that is basically it.</div>
Anonymoushttp://www.blogger.com/profile/15995362150377842185noreply@blogger.com0tag:blogger.com,1999:blog-5803724860059155983.post-34438038058595731092012-07-06T10:37:00.001-07:002014-01-20T08:48:37.953-08:00VmWare Player could not open /dev/vmmon<div dir="ltr" style="text-align: left;" trbidi="on">
Today the power flickered and the dev machine rebooted. After that, VmWare Player refused to start the virtual machine hosting svn, giving the following message:<br /><br />
<pre class="bb-code-block">Could not open /dev/vmmon: No such file or directory.
</pre>
<div style="text-align: left;">
Recompiling the kernel modules helped:<br /><br /> sudo rm /lib/modules/2.6.32-41-generic-pae/misc/*<br />sudo vmware-modconfig --console --install-all --appname="VMware Player" --icon="vmware-player"<br /></div>
<br /></div>Anonymoushttp://www.blogger.com/profile/15995362150377842185noreply@blogger.com0tag:blogger.com,1999:blog-5803724860059155983.post-40938024916662809072012-07-05T08:47:00.001-07:002012-07-05T08:47:28.200-07:00VmWare Server is dead...<div dir="ltr" style="text-align: left;" trbidi="on">
Yesterday was probably the last day I used VmWare Server. It kept spinning for a long time, until Web Access stopped starting after yet another Ubuntu 10.04 update to kernel 2.6.32-41. All attempts to resuscitate this dinosaur led nowhere.<br />
On the one hand it is a shame it died practically out of the blue; on the other hand, running software the vendor stopped supporting years ago for this long is bad form anyway. <br />
The comforting part is that I had long since migrated almost all the important services from vmware VMs to Xen, so no unplanned urgent migration was needed. The only thing I had to move quickly was the service that served crossdomain files to the site's users. <br /> Of the important bits, only svn and the development databases remain, but those can run for a while in VmWare Player on the dev machine until I move them somewhere else.<br />
<br />
And think of all the RAM that will be freed now! </div>Anonymoushttp://www.blogger.com/profile/15995362150377842185noreply@blogger.com0tag:blogger.com,1999:blog-5803724860059155983.post-25095804848728145902012-07-04T07:11:00.000-07:002012-07-04T07:11:32.964-07:00Bash and quotes<div dir="ltr" style="text-align: left;" trbidi="on">
I rarely have to write anything in bash, but yesterday the need arose. And it turned out that passing a double-quote character " on the command line in bash is not trivial at all. However I tried (escape characters, wrapping the string in single quotes and so on), nothing worked for me; only one approach helped:<br />
<br />agent="my user agent"<br />cmd="wget http://mysite.com -U $agent"<br />eval $cmd<br /><br />(A simpler fix that avoids eval, for the record, is to double-quote the expansion itself: wget http://mysite.com -U "$agent".)</div>Anonymoushttp://www.blogger.com/profile/15995362150377842185noreply@blogger.com0tag:blogger.com,1999:blog-5803724860059155983.post-24401120068647118682012-06-19T05:42:00.001-07:002012-06-19T05:45:07.342-07:00MTS Connect<div dir="ltr" style="text-align: left;" trbidi="on">
I swore I would never deal with MTS, because you will not find an operator that treats its customers worse... But I had to buy a 3G modem to be able to monitor my services while out of town. Since Megafon's 3G coverage map does not display on their site at all (the server is down, and not for the first time), while the MTS map shows coverage exactly in the region I am going to, I decided to take the MTS modem. Why I did not look at Beeline I do not know; probably because I was told that MTS works fine out there.<br />
<br />
I bought the modem and was once again convinced it would have been better not to deal with this lousy operator at all.<br />
First, the modem never came up on my Linux netbook with Ubuntu 10.04. If I cannot get it configured today, I will have to drag the huge Windows laptop along, which I would rather not do...<br />
Second, even on Windows the installation was not smooth: the installer froze some system program solid and then hung itself. After that it did install, though. <br />
<br />
But the main thing: I bought a <b>14 Mbit</b> modem, measured the speed, and it turned out that downloads run at <b>1.5 Mbit/s</b> to the city where I live and at <b>2 Mbit/s</b> to Moscow. And that is in a place with very dense 3G coverage. Why the hell do these clowns sell a product whose characteristics are 10 times worse than promised??? <br />
<br />
And on top of that, as an option, I could have paid an extra 350 rubles and enjoyed the same one and a half megabits on a 21.6 modem!<br />
<br /></div>Anonymoushttp://www.blogger.com/profile/15995362150377842185noreply@blogger.com0tag:blogger.com,1999:blog-5803724860059155983.post-12872035293259136522012-06-16T05:46:00.000-07:002014-01-20T08:41:16.560-08:00Redis in real life<div dir="ltr" style="text-align: left;" trbidi="on">
So, the first, or rather the second, post (the first one was about how much I hate all this) will be about redis. No, not the radish that grannies grow on the beds of their half-ruined dacha plots and then sell to us by the metro, but the modern NoSQL database that provides fast access to frequently changed and frequently requested data. <br />
<br />
For those who do not know, Redis makes it easy to set up an out-of-process memory cache and, on top of that, can persist data to durable storage, which turns it from a plain cache into a full-fledged database. Since redis has decent performance, I chose it for my project, which needs to store millions of key-value records. I am not an admin: when I install a piece of software I want it to work right away so I can forget about it for good. And redis seems to allow exactly that: it installs easily and does not crash in operation, but... as always there are subtle points that will not let you live in peace.<br />
<br />
Redis has three ways of working with data: in-memory, when nothing is saved to disk at all; rdb, when every x seconds a function saves the entire dataset to disk; and aof, when changed data is continuously appended to an on-disk journal and later, after a crash for example, replayed from that journal.<br />
<br />
I chose rdb mode, simply because it was easier to understand and, besides, its documentation carried no footnotes like the aof one about everything possibly going south in certain cases.<br />
<br />
Chosen, then. I configured the database to re-save every 5 minutes (it does not scare me much if the last 5 minutes of changes are lost in case of problems like a power outage), set up regular backups of the rdb file to another machine, and it seemed I could forget about it... but it turned out I could not!<br />
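For reference, a five-minute snapshot in rdb mode corresponds to a save line in redis.conf roughly like this (the change threshold of 1 key and the paths are my assumptions; the post does not show the actual config):

```conf
# redis.conf sketch: write an RDB snapshot every 300 seconds
# if at least 1 key changed (threshold assumed, not from the post)
save 300 1
dbfilename dump.rdb
dir /var/lib/redis
```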
<br />
So, problem number 1. The database grows steadily in size (which is expected), the growth rate is not very high, and the database should take about 6 months to fill the 8 gigs of RAM. Seemingly I could relax until we approached the ceiling, but no such luck!<br />
The machine has 8GB, and the current database size reached about 3.7GB. You can check the database size with the INFO command in the redis client redis-cli. It prints the following parameters:<br />
<br />
<b>used_memory</b> - how much memory redis has allocated for the database<br />
<b>used_memory_rss</b> - how much memory redis takes from the OS (this number is bigger, because of memory fragmentation)<br />
<b>used_memory_peak</b> - no idea what this is supposed to mean, because the number is always smaller than used_memory <br />
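To watch just these counters without scrolling through the whole INFO dump, the output can be filtered; a sketch that uses hard-coded sample numbers in place of a live redis-cli INFO call (the values are invented):

```shell
# In real life the first assignment would be: info=$(redis-cli INFO)
info="used_memory:3972844720
used_memory_rss:4268294144
used_memory_peak:3972844720"

# Pull out a single counter by name
rss=$(echo "$info" | grep '^used_memory_rss:' | cut -d: -f2)
echo "used_memory_rss = $rss"
```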
So: as soon as redis had eaten about half of the OS memory, it stopped saving data to disk. I did not notice right away, only a couple of days later when the machine rebooted because of power problems. Wonderful! Naturally, the backups were of no help at all, because they simply did not contain the data that had been sitting in RAM.<br />
<br />
So I started digging, and here is what came out. Redis uses the operating system's fork() to get a complete copy of the current process and save the data in the background. And fork works in a clever way: say there are 4GB of data; fork does not copy the memory pages, it merely keeps references to them, and new pages are allocated only when those pages change in the parent or the child process. Since redis uses the second process only for reading, what matters are the changes in the parent process. In other words, extra memory is consumed only when redis clients change key values while the save to disk is in progress. <br />
<br />
But in my case not that much data changes while the file is being written; the memory should be enough, yet redis stubbornly logs "could not allocate memory". Why?<br />
<br />
I kept digging, and it turns out the OS is not so simple either and handles fork() in an interesting way. An example: the database is 4.1GB and 10MB of data changes during the save, so peak memory usage during fork() should be 4.11GB. Will 8GB of RAM be enough? With the default settings it turns out it will not. The OS does not know beforehand how much memory the processes will really need, so it assumes the worst case, in which every page would have to be duplicated. And since 8.2 GB no longer fits into RAM, the OS refuses to even begin the fork(), failing with out of memory. <br />
<br />
Fortunately, the OS can be tricked here. There are two settings: <b>vm.overcommit_memory</b> and <b>vm.overcommit_ratio</b>. The current values can be checked either with the command<br />
<b>sysctl vm.overcommit_ratio</b><br />
or<br />
<b>cat /proc/sys/vm/overcommit_ratio</b><br />
<br />
These settings let you tune the OS so that it does not refuse to allocate memory even when the memory is not really there. For that, overcommit_memory must be set to 2. The OS will then assume that the amount of available memory depends on overcommit_ratio. Here is the formula for the available memory as a function of overcommit_ratio: <br />
<div style="text-align: left;">
<i>allocatable memory = swap size + RAM size * (overcommit_ratio / 100)</i></div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
overcommit_ratio here is given in percent. </div>
<br />
Since my swap is rather small, I set overcommit_ratio to 200, which increased the available memory a bit more than twofold.<br />
<br />
This can be done either with the sysctl command or by editing /etc/sysctl.conf<br />
<br />
<br />
<b>sysctl -w vm.overcommit_memory=2</b><br />
<b>sysctl -w vm.overcommit_ratio=200</b><br />
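Plugging numbers into the formula shows what ratio 200 buys; a sketch with assumed sizes (the post does not state the exact swap size, so 2 GB is my guess):

```shell
# allocatable = swap + RAM * overcommit_ratio/100, all sizes in KB
swap_kb=2097152                       # 2 GB swap (assumed)
ram_kb=8388608                        # 8 GB RAM
ratio=200                             # vm.overcommit_ratio, percent
alloc_kb=$((swap_kb + ram_kb * ratio / 100))
echo "allocatable: $alloc_kb KB"      # 18874368 KB, i.e. 18 GB
```

With roughly 18 GB of allocatable memory the worst-case fork() of a 4 GB redis process fits comfortably.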
<br />
Still, redis has to be constantly monitored to make sure it keeps saving data. The command <br />
<b>tail -f /var/log/redis_6379.log | grep save </b><br />
helps with that: if there is no pile of memory allocation errors, everything is probably fine.<br />
<br />
More about the virtual memory settings can be read <a href="http://www.redhat.com/magazine/001nov04/features/vm/">here</a> (with one correction: the article mixes up the meanings of values 1 and 2 for vm.overcommit_memory)<br />
<br />
Problem number 2. I have not yet appreciated this problem in full, but I think it will bring plenty of grief soon. The thing is, Redis forks very slowly when running under Xen! Saving a 7 GB dataset, the fork alone takes about a second. (The time spent in fork() can be found via the Redis INFO command: the latest_fork_usec parameter shows the fork time in microseconds.) And this can cause a lot of trouble, because for the duration of the fork redis is blocked and unavailable to clients. Imagine a highly loaded environment with thousands of concurrently working clients where the cache cannot be reached for a whole second. As the redis developers <a href="http://redis.io/topics/latency">write</a>, this is a Xen problem, since it is not observed anywhere else. <br />
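Since latest_fork_usec is reported in microseconds, a quick conversion helps when judging how bad the stall is; a sketch with an invented sample value:

```shell
# latest_fork_usec would come from: redis-cli INFO | grep latest_fork_usec
latest_fork_usec=983042              # sample value: roughly a one-second fork
fork_ms=$((latest_fork_usec / 1000))
echo "last fork took $fork_ms ms"
```

During that whole interval redis serves no clients, so the number maps directly to a visible stall.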
<br />
You may ask why everything revolves around Xen here. If only because my server runs xen with assorted VMs on top, and besides, almost all clouds are built on Xen technology one way or another.<br />
<br />
<br /></div>Anonymoushttp://www.blogger.com/profile/15995362150377842185noreply@blogger.com0